An idea that has been floating through the news in recent years is Edge Computing. One flavor of the concept is to have relatively lightweight client devices that receive pre-processed data from servers located close to the edge of the network, instead of doing the processing themselves. Recently, I’ve come across two applications that fit the bill and that I could experiment with a bit: cloud gaming and immersive Virtual Reality.
The idea behind cloud gaming is to run a game that requires significant CPU and GPU power on a server somewhere in the network instead of on a local PC, and to stream the video output to the player over the network. This could be appealing for occasional gamers who don’t have a high-end gaming PC at home that costs several thousand euros. There are quite a number of such cloud gaming services today, and I chose to have a closer look at Geforce Now.
As I’m not a gamer, I didn’t want to buy one of the latest high-end games, but instead chose “Star Conflict”, a somewhat older 3D space shooter that was available for free. From a video output streaming point of view, there should be little difference. Geforce Now offers two ways to play games in the cloud: either with a dedicated app (Mac and Windows) or in the browser. I chose to try the latter, as I use Linux on my notebook. As you might imagine, streaming the video output from the network-based server to the local device requires a fast Internet connection. As advertised, the average data rate while playing the game easily exceeded 35 Mbps.
In the output of a tcpdump trace I ran while playing the game, I could see that the video output was streamed to my notebook in UDP packets with a size of around 1000 bytes. Using UDP makes sense, as there is little if any time to retransmit dropped packets. I played the game over my 100 Mbps VDSL link and Wi-Fi at home, and the round trip time to the server was around 60 ms. Obviously, this is not quite edge computing: while I was in Germany, it looked like the server was somewhere in the UK. Still, I didn’t notice much of a lag while playing the game. ‘Real’ gamers would probably like to have the servers closer to their ‘edge’, as interaction with other players at such a delay might impact the experience.
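From the two numbers observed above, roughly 35 Mbps arriving in UDP packets of about 1000 bytes each, one can do a quick back-of-the-envelope calculation of the packet rate the notebook has to process. A minimal sketch in Python, using only those observed figures:

```python
# Back-of-the-envelope check: how many UDP packets per second does
# a ~35 Mbps video stream in ~1000-byte packets translate to?

DATA_RATE_MBPS = 35        # observed average data rate
PACKET_SIZE_BYTES = 1000   # observed (approximate) UDP packet size

packets_per_second = (DATA_RATE_MBPS * 1_000_000) / (PACKET_SIZE_BYTES * 8)
print(f"~{packets_per_second:.0f} packets per second")  # ~4375
```

At well over 4000 packets per second, it is easy to see why the service uses UDP: at that rate there is simply no time to stall the real-time video stream waiting for retransmissions.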
The second edge computing application I recently came across was in the “Miniatur Wunderland” in Hamburg. Here, they recently started to offer a virtual reality experience that ‘shrinks’ you into their miniature world, which you can then go on and explore. A Virtual Reality headset is used to ‘transport’ up to 8 visitors simultaneously into the virtual world. Location sensors on the head, hands and feet feed the location and motion of the 8 people back to the system, which allows all participants to ‘see’ the avatars of the other people in the virtual world and to interact with them. The sensors are also required to be able to walk around in the virtual world. While there must be some sort of central server to exchange the location information between the participants, I suppose that the graphical processing, which has to be done independently for each of the 8 participants, is mostly done in the ‘computing’ backpack everybody wears. The backpack weighs seven kilograms, and the battery lasts about 20 minutes and has to be exchanged once during the excursion. That kind of gives one an idea of how much power the computing equipment in the backpack draws.
The next step would obviously be to move most of the compute power of the backpack to edge servers and stream the virtual world to participants over the network. And here, a 60 millisecond delay might be too much, as the feedback is not for the press of a button, but for walking through a world, turning one’s head, and so on. Also, the VR experience depends heavily on the resolution of the screens in the goggles, so I can imagine that the overall data rate to each participant from the edge of the network will be even higher than the 35 Mbps of the previous example. But it’s not impossible. With 5G on the 3.5 GHz band n78 or 802.11ax Wi-Fi, gigabit speeds can easily be reached, particularly over short distances. Also, the power requirements for receiving and displaying a video stream for each eye are probably much lower than for locally computing the virtual world.
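To get a feel for the order of magnitude, here is a rough sketch of such a data rate estimate in Python. All input numbers are assumptions chosen for illustration, not measured values: a headset with around 2160×2160 pixels per eye at 90 Hz, and a modern video codec compressing the stream to roughly 0.1 bits per pixel:

```python
# Rough, illustrative estimate of the data rate needed to stream
# compressed video to both eyes of a VR headset. All constants below
# are assumptions for the sake of the calculation.

WIDTH = 2160           # assumed horizontal pixels per eye
HEIGHT = 2160          # assumed vertical pixels per eye
REFRESH_HZ = 90        # assumed display refresh rate
BITS_PER_PIXEL = 0.1   # assumed compressed bits per pixel (modern codec)
EYES = 2

bitrate_bps = WIDTH * HEIGHT * REFRESH_HZ * BITS_PER_PIXEL * EYES
print(f"~{bitrate_bps / 1e6:.0f} Mbps per participant")  # ~84 Mbps
```

Under these assumptions, the result lands around 84 Mbps per participant, i.e. more than double the 35 Mbps of the cloud gaming example, but still comfortably within the gigabit speeds mentioned above.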
So yes, it is quite obvious where this will go from here: better and better virtual reality worlds in which people can move around, computed on edge servers, smaller and smaller VR headsets, and less and less additional hardware required on the person immersing into the virtual world.