360° video transmission offers an immersive experience to viewers and is an integral part of several applications such as the Metaverse. Such content at Ultra-High Definition (UHD) or greater resolutions requires a substantially higher bitrate for transmission, even when encoded using the latest codecs. In this work, we propose a machine-learning-based adaptive UHD 360° immersive video streaming solution, MAIVS, that reduces the data rate required to stream high-resolution 360° immersive videos. We divide the videos spatially into motion-constrained tile sets (MCTS), encode them using HEVC, and package them into MP4 containers at different quality levels. We train a Deep Neural Network (DNN) model for each segment of the video to upscale it to a higher resolution at the client. We use the Dynamic Adaptive Streaming over HTTP (DASH) framework to stream the video tiles and the model parameters progressively. The tiles directly in the viewer's Field of View (FoV) are streamed at the highest possible quality, while a lower resolution is used for the other tiles. We use video quality (PSNR), buffer conditions, and available network bandwidth as feedback to train a Deep Q-Network (DQN) and select the quality level of each tile segment accordingly. Overall, by using reinforcement learning in our proposed MAIVS framework, we improve the client-side PSNR while reducing the bitrate required to stream high-resolution (UHD and higher) 360° videos over the internet.
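
As a rough illustration of the adaptation step described above, the following is a minimal sketch of a DQN-style quality selector. The state features (estimated bandwidth, buffer level, last PSNR), the quality ladder size, the network architecture, and all names (QNetwork, select_quality) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: these constants are assumptions for illustration.
N_QUALITY_LEVELS = 4   # assumed bitrate/quality ladder size per tile
STATE_DIM = 3          # [estimated bandwidth, buffer level, last PSNR]


class QNetwork(nn.Module):
    """Maps a streaming state to Q-values, one per quality level."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_quality(q_net: QNetwork, bandwidth_mbps: float,
                   buffer_s: float, last_psnr_db: float,
                   epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the next tile segment's quality level."""
    if torch.rand(1).item() < epsilon:
        # Explore: pick a random quality level.
        return int(torch.randint(N_QUALITY_LEVELS, (1,)).item())
    # Exploit: pick the level with the highest predicted Q-value.
    state = torch.tensor([[bandwidth_mbps, buffer_s, last_psnr_db]],
                         dtype=torch.float32)
    with torch.no_grad():
        q_values = q_net(state)
    return int(q_values.argmax(dim=1).item())


# Example: choose a quality level for the next in-FoV tile segment.
q_net = QNetwork(STATE_DIM, N_QUALITY_LEVELS)
level = select_quality(q_net, bandwidth_mbps=25.0, buffer_s=4.0,
                       last_psnr_db=38.5)
print(f"Selected quality level: {level}")
```

In a full system, the chosen quality level would index a DASH representation for each tile, and the observed PSNR, buffer occupancy, and throughput after download would form the reward signal used to train the network.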