Between November 16 and 18, 2021, a new edition of the Autonomous Driving Challenge was held in Barcelona, within the Smart City Expo 2021, organized by SEAT (Volkswagen Group) and by the Institute of Robotics and Industrial Informatics (IRI-UPC).
The Autonomous Driving Challenge is an annual competition aimed at students with a technical background in robotics who are interested in developing the automotive technology of the future. Participants program fully automated driving functions and the necessary software architectures.
They work with 1:8 scale vehicles developed exclusively for the competition. The cars are equipped with a LIDAR sensor, ultrasonic sensors, cameras, a high-performance NVIDIA graphics card and a CPU, and they run ROS libraries.
Using software developed in-house by the teams, the model cars must be able to drive themselves, follow all the rules and avoid obstacles.
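As an illustration of how such a sensor-to-actuation loop can be wired together with ROS, the minimal sketch below subscribes to a LiDAR scan and publishes velocity commands that stop and steer the car away from nearby obstacles. The topic names, thresholds and overall logic are assumptions made for illustration; the teams' actual software is not published.

#!/usr/bin/env python
# Minimal ROS (rospy) sketch: stop and turn when the LiDAR sees a close obstacle.
# Topic names and thresholds are hypothetical, chosen only for illustration.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class ObstacleAvoider(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.on_scan)

    def on_scan(self, scan):
        # Keep only valid readings in the middle third of the scan (roughly ahead).
        n = len(scan.ranges)
        front = [r for r in scan.ranges[n // 3:2 * n // 3]
                 if scan.range_min < r < scan.range_max]
        cmd = Twist()
        if front and min(front) < 0.5:   # obstacle closer than 0.5 m: stop and turn
            cmd.linear.x = 0.0
            cmd.angular.z = 0.8
        else:                            # path clear: drive forward slowly
            cmd.linear.x = 0.3
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('obstacle_avoider')
    ObstacleAvoider()
    rospy.spin()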
This work is applicable and scalable to real-world autonomous driving algorithms and gives students the opportunity to take part in creating that future concept of mobility.
The project began in February 2021, when the selected teams received the model vehicle. The organization provided teaching materials, training sessions and technical talks by robotics experts to ensure that all participants had a common starting point.
The student teams spent nine months developing their software, which ultimately allowed the cars to drive autonomously.
On November 17 and 18, the Catalan Audiovisual Media Corporation (CCMA) produced a special 360º multi-camera live broadcast of the Autonomous Driving Challenge, which allowed users to move around the circuit virtually and watch the trials in first person.
In addition, users were able to enjoy a unique TV production carried out by the CCMA together with students from the Thau School in Barcelona, who acted as the production team and were responsible for narrating and broadcasting the trials.
ViVIM Project
This CCMA production was part of the activities of the ViVIM project (Computer Vision for Multiplatform Immersive Video), which aims to create a new audiovisual format based on omnidirectional video. The new format will offer end users a consistent audiovisual experience across virtual reality devices, smartphones, tablets and SmartTVs. The project covers both live and pre-recorded video production and involves adapting both the technology and the storytelling to exploit the full possibilities of immersive screens.
The project aims to have an impact on the ecosystem of content creators, distributors and consumers. The project consortium partners are i2Cat, as coordinator, the Computer Vision Center (CVC), the CCMA, Visyon and Eurecat.
The CCMA's participation in the project, through TV3, focuses on contributing the technical and artistic requirements, as well as on defining the tools that will enhance the creation, distribution and consumption of immersive and interactive audiovisual content.
TV3 also collaborates in the coordination, deployment and execution of the project's pilots, which are open to the public and help analyze and measure the results, with the aim of maximizing the impact on the audiovisual sector.
5G Technologies
The contribution and broadcast of the event were carried out over a 5G network, in collaboration with Vodafone, which provided the communications equipment and 5G coverage in the Smart City Expo pavilion where the competition took place.
This coverage made it possible to contribute the signals from three smartphones that acted as mobile cameras. The distribution of the produced content, as well as the feeds from the 360º cameras, was also carried out over the 5G network up to the cloud, from where it was broadcast live to all users.
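As a rough sketch of what this kind of 5G contribution involves, the example below pushes a locally captured camera feed as an MPEG-TS stream over SRT towards a cloud ingest point using ffmpeg. The ingest URL, capture device and encoding settings are placeholders; in the actual production the smartphones handled contribution themselves over the 5G link.

import subprocess

# Hypothetical ingest endpoint; the production's real cloud address is not public.
INGEST_URL = "srt://ingest.example.net:9000?mode=caller"

# Push a local camera feed as MPEG-TS over SRT (requires an ffmpeg build with SRT
# support). The capture device and bitrate are placeholders for illustration only.
cmd = [
    "ffmpeg",
    "-f", "v4l2", "-i", "/dev/video0",        # local capture source (placeholder)
    "-c:v", "libx264", "-preset", "veryfast",
    "-b:v", "6M", "-g", "50",
    "-f", "mpegts", INGEST_URL,
]
subprocess.run(cmd, check=True)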
The Technical Deployment
For the 360º multi-camera live production of the Autonomous Driving Challenge at the Smart City Expo, a technical deployment was designed based on the technology used in remote production and in mobile journalism (MOJO).
The deployment had a total of 8 cameras:
3 smartphones equipped to transmit via the 5G network.
2 Panasonic robotic cameras connected directly to the TV production area.
1 overhead camera transmitting over the 5G network.
2 360º cameras connected to the broadcast server.
Ambient audio capture was also planned, along with microphones for the interviews conducted on the mobile phones.
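To make the routing of these sources easier to follow, here is a purely illustrative summary of the eight cameras and how each one reached the production system, expressed as a Python data structure. The labels and transport descriptions are inferred from the list above and are not the real configuration.

# Illustrative inventory of the eight camera sources and their contribution paths,
# as described above; names are hypothetical, not the broadcast's real configuration.
CAMERA_SOURCES = [
    {"name": "mobile-1",  "type": "smartphone",        "transport": "5G"},
    {"name": "mobile-2",  "type": "smartphone",        "transport": "5G"},
    {"name": "mobile-3",  "type": "smartphone",        "transport": "5G"},
    {"name": "robotic-1", "type": "Panasonic robotic", "transport": "direct video link"},
    {"name": "robotic-2", "type": "Panasonic robotic", "transport": "direct video link"},
    {"name": "overhead",  "type": "overhead camera",   "transport": "5G"},
    {"name": "360-1",     "type": "360º camera",       "transport": "link to broadcast server"},
    {"name": "360-2",     "type": "360º camera",       "transport": "link to broadcast server"},
]

# Quick sanity check: the deployment totalled eight cameras.
assert len(CAMERA_SOURCES) == 8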
All the audiovisual capture was centralized in a production system based on vMix software, which received the different audio and video signals over the 5G network and, in the case of the robotic cameras, over a direct video connection.
The vMix production system also handled the reception of the data and ancillary content needed to keep the textual and graphical information on screen at all times.
The programme stream was sent from vMix to the central server system, where, together with the streams from the two 360º cameras, the content was produced and streamed to the multi-camera endpoint in the cloud over Vodafone's 5G network.
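Production systems like vMix can be driven remotely over an HTTP API, which is the usual way to automate graphics updates and stream control of this kind. The snippet below is a hedged sketch assuming that API: the host address, input names and title field are invented for illustration and do not reflect the broadcast's real configuration.

import requests

# vMix exposes an HTTP API (default port 8088). The host, input names and title
# field below are assumptions for illustration only.
VMIX = "http://192.168.1.50:8088/api/"

def vmix(function, **params):
    """Call a vMix API function, e.g. Cut, SetText or StartStreaming."""
    params["Function"] = function
    response = requests.get(VMIX, params=params, timeout=5)
    response.raise_for_status()

# Cut the programme output to the overhead 5G camera (hypothetical input name).
vmix("Cut", Input="Overhead5G")

# Update an on-screen title with live data, as used for the textual and graphical
# information mentioned above (hypothetical title input and field name).
vmix("SetText", Input="RaceTitle", SelectedName="Headline.Text", Value="Lap 3 of 5")

# Start the configured output stream towards the central server.
vmix("StartStreaming")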
Users were able to access this content through the portal that the CCMA deployed on the Internet.
All of this is possible because, over the last two years, audiovisual production for outside broadcasting has undergone a very significant change in the way live content is produced, shifting from the old mobile production and broadcast units to new, much lighter production systems that are flexible and quick to deploy without compromising broadcast quality. 5G technologies are essential to the success of these remote productions and transmissions, where connectivity is a key parameter.