For an industry that quite literally relies on speed and efficiency, growing capability in big data and networking has raised the already-high stakes for Red Bull’s Formula 1 racing team.
Though the masses were drawn to the Grand Prix racetrack at Albert Park in Melbourne last weekend, behind the roar of the circuit was a meticulous collaboration by Red Bull Racing’s 700-strong team, built almost entirely on real-time data feeds.
Red Bull’s latest car, the RB12, contains around 100 strategically placed sensors monitoring the performance of some 10,000 unique components, measuring variables including wind force, tyre pressure, fuel burn and brake temperature, with the resulting telemetry constantly feeding into the team’s global unified network.
Using these data gathering techniques, Red Bull Racing (RBR) was able to design, manufacture and build a prototype of the RB12 in just five months, test the brand new multi-million dollar car in the space of eight days, and track every nook and cranny of the vehicle to ensure optimum performance. This data was then combined with other known factors, such as the driver, the weather and the specific track, to predict how a race will play out.
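The article does not describe RBR’s actual software or data formats. Purely as an illustration (all channel names and values are hypothetical), telemetry of this kind is commonly modelled as timestamped channel readings that downstream tools aggregate into summaries engineers can scan at a glance:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class SensorReading:
    """One timestamped sample from a car-mounted sensor (hypothetical schema)."""
    channel: str       # e.g. "tyre_pressure_fl", "brake_temp_rr"
    timestamp_ms: int  # milliseconds since session start
    value: float       # unit depends on the channel

def channel_summary(readings, channel):
    """Reduce one channel's samples to min/mean/max for quick review."""
    values = [r.value for r in readings if r.channel == channel]
    if not values:
        return None
    return {"min": min(values), "mean": mean(values), "max": max(values)}

readings = [
    SensorReading("brake_temp_rr", 1000, 412.0),
    SensorReading("brake_temp_rr", 1100, 455.5),
    SensorReading("tyre_pressure_fl", 1000, 21.8),
]
print(channel_summary(readings, "brake_temp_rr"))
```

This is only a sketch of the general pattern; real motorsport telemetry systems use dedicated binary protocols and far richer schemas.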
The ability to gather, analyse and act on data in real-time, both in testing cycles and during a live race, is essential for staying competitive in an industry where a fraction of a second can change the outcome completely.
RED BULL RACING TEAM
Total races: 202
Pole positions: 57
Planning for success
Collecting data on the RB12 design and function, which is constantly being altered, helps the team determine how the car will fare on race day.
“Races are won long before the car reaches the racetrack,” Alan Peasland, RBR’s head of technical partnership, tells CIO Australia. “But to finish first, first you must finish.”
Race weekends are typically manic: a whole new car must be designed, developed and tested before the team even reaches the track.
“We need to test it and upgrade it continually. We have to plan for many global and logistical challenges, and we need a subset of everything we can do in our UK headquarters on the ground, by the track,” Peasland says.
For each race, 30,000kg of freight must be shipped to the race location, including cars weighing 700kg each, garage equipment, and IT infrastructure. In 2016, this will be done for 21 races across multiple continents, over a nine-month period, with a race approximately every two weeks.
Further, the Fédération Internationale de l'Automobile (FIA) has strict rules about the number of team personnel allowed on-site, so on race weekends Red Bull has to make do with 60 engineers at the track. This includes the technical crew in the pit, along with IT staff, mechanics, and broadcast and engineering crews working in the garage.
Using high-bandwidth networking and communication services provided by AT&T, the team then extends its capability to an additional 30 engineers watching from RBR headquarters in Milton Keynes, England – also referred to as ‘the factory’.
Just like RBR with their cars, the AT&T team must thoroughly test the network and solve problems before they occur, ensuring everything is up and running ahead of all the race weekends, so Red Bull Racing can simply ‘plug and play’ when they arrive at the track.
The whole team then utilises its global connectivity to collaborate in the days leading up to the race. A mobile hub is created at each track location, containing row after row of monitors with endless flows of data for on-site engineers to scour.
New pressures and parts
In the three days prior to race day, cars are tested, fitted out, monitored and, if necessary, rebuilt to act on the data and suit the conditions of the race. For each event, the team must plan and design a new front and rear wing and new suspension, while constantly tweaking the car’s remaining components for the circuit.
“The big challenge for us is the technical regulations that go with the sport; every year they change,” says Peasland.
“We’ve gone from 19 races last year to 21 this year, which changes the whole demand on the reliability and life of the parts and the engine. We have less testing time this year too, and that makes gathering information off the car for the race weekend quickly and reliably all the more important.”
A huge challenge came in 2014 with a regulation change requiring the replacement of the previous 2.4 litre V8 engine with a 1.6 litre turbocharged V6 hybrid engine, meaning the team had to start from scratch with its design. This year, the introduction of a new tyre type has also created additional strain, as it introduced new specifications that had to be evaluated and tested.
“Everything has its own life and performance characteristics, so we need to understand that as much as possible. We’ve got to get data on that to see how it performs, with minimal track time, adding more dimensions and variables to figure out, so changing regulations really require you to plan ahead.”
The team relies heavily on the virtual environment provided by AT&T to collect and send larger volumes of data and communicate effectively.
Data first travels to the garage, where mechanics and engineers see the information relevant to them. Information is analysed in ‘the racks’ - a designated area that houses the team’s entire IT kit - with rows of monitors on which hundreds of gigabits of data are continually scoured by on-site engineers and AT&T teams.
Data is also sent to the UK operations room via Ethernet VPN for additional support and analysis from 30 engineers, with less than 300 milliseconds of latency for data travelling from the Albert Park track to Milton Keynes.
Because the RB12 utilises a Renault engine, Renault also has an office in France monitoring data from the power unit, supporting the RBR team and offering advice on how to manage it throughout the race weekend.
“That’s the critical bit. Staying connected means these people can do things in foreign offices that we can’t do at the track,” says Peasland.
Unsurprisingly, speed is a huge focus for RBR, as the competitive edge can come down to as little as a tenth of a second. Twice the team has claimed the world’s fastest pit stop, including the currently unbeaten time of 1.923 seconds, in which 20 mechanics changed all four of the car’s tyres in perfect choreography, mid-race.
Red Bull Racing's original world record pit stop time of 2.5 seconds at the Malaysian Grand Prix 2013
“That’s the best example of how we operate as a team and how we constantly strive for improvement, how we combine human performance with technology to achieve the best,” says Peasland.
“The key is how reliably and consistently can we do that. It can be just one tenth of a second that helps us get ahead, so it’s crucial.”
Race day challenges
Race day arrives and, hidden from the crowds, the RBR team monitors thousands of parameters on the racing car, allowing the technical staff to analyse all aspects of vehicle and driver performance and anticipate any issues before they occur.
Each individual sensor on the racing car sends real-time data to the team on a secure line, circulating from the garage to HQ and the pit using a trackside LAN and temporary garage connection point provided by AT&T.
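The article does not say how RBR’s software flags problems before they occur. A minimal sketch of the general idea — checking each incoming parameter against an allowed operating band and raising an alert when it strays — might look like the following (channel names and thresholds are invented for illustration):

```python
# Allowed operating bands per telemetry channel (illustrative values only).
LIMITS = {
    "brake_temp_rr": (100.0, 700.0),   # degrees C
    "tyre_pressure_fl": (19.0, 25.0),  # psi
}

def check_reading(channel, value):
    """Return an alert string if the value is outside its band, else None."""
    lo, hi = LIMITS[channel]
    if value < lo:
        return f"{channel}: {value} below minimum {lo}"
    if value > hi:
        return f"{channel}: {value} above maximum {hi}"
    return None

alerts = [a for a in (
    check_reading("brake_temp_rr", 655.0),    # within band: no alert
    check_reading("tyre_pressure_fl", 17.5),  # under-inflated: flagged
) if a]
print(alerts)
```

In practice such checks would run continuously against the live stream, with bands tuned per circuit and per session, but the flag-on-threshold pattern is the simplest form of the anticipatory monitoring the article describes.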
“During a race, anything can happen…Being able to react and respond to on-track incidents, both at the track and in the UK, plays a vital role in determining the finish position and result of a race,” says Peasland.
Datasets on race day are enormous: race teams at the 2014 US Grand Prix collectively gathered more than 243 terabytes of data, while RBR alone averages around 400GB every race weekend, according to AT&T.
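To put the 400GB figure in context, a back-of-the-envelope calculation shows why high-bandwidth links matter; the link speeds below are assumed for illustration, not reported in the article:

```python
def transfer_hours(data_gb, link_mbps):
    """Hours needed to move data_gb gigabytes over a link_mbps (megabit/s) link."""
    bits = data_gb * 8e9                 # decimal gigabytes -> bits
    seconds = bits / (link_mbps * 1e6)   # megabits/s -> bits/s
    return seconds / 3600

for mbps in (100, 1000):
    print(f"400 GB at {mbps} Mb/s: {transfer_hours(400, mbps):.1f} h")
```

At an assumed 100 Mb/s a race weekend’s 400GB would take roughly nine hours to ship back to the UK; at 1 Gb/s, under an hour — which is why the data is streamed continuously rather than moved in bulk after the event.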
The crew at the factory, in the garage and in the racks are constantly watching screens and passing on crucial information, but the final decisions are made at the pit wall on the very front lines, which includes RBR chief technical officer Adrian Newey, team principal Christian Horner, and a number of senior technical staff.
“It’s Adrian’s job to manage the race,” Peasland adds. “But Adrian will be making decisions based on input from strategy engineers, tyre engineers, performance engineers, all based in the UK.
“They’re all analysing and feeding through all the data, looking at the whole picture of the race and trying to come up with the best idea for when we should pit, tyre change, or attempt to overtake another driver.
“Ironically, the guys in the pit rarely even watch the car or the race; they’re just trawling through the data on their own monitors, with just a letterbox gap to see the race through.”
Often Newey won’t even be required on-site, choosing instead to manage the race from the UK office. Following the race, if the car is damaged or issues are noted with any components, the team is often left with a mere week to come up with a solution, then design, manufacture and test it.
Indeed, during Melbourne’s race, RBR racer Daniil Kvyat suffered an “electronics gremlin” causing the car to shut down, which will need to be analysed and fixed in time for the next race in Bahrain.
Fellow racer, Australian Daniel Ricciardo, finished in fourth place, just shy of a podium position, having started in eighth. Ricciardo also became the second driver, after Mark Webber in 2012, to record a fastest lap at his home race.
“A great drive from Daniel at his home race, he drove a competitive race from start to finish,” team principal Christian Horner said in a statement following the event. “It was always a long shot for a podium [position] but he did everything possible today. It’s encouraging to see the pace of the car in the race trim.”
Future of F1 connectivity
Making decisions based on gut feeling is definitely a thing of the past for Formula 1, and Peasland says RBR will continue to rely on AT&T’s networking and unified communications capability into the future, having extended and expanded its contract for a second time in February this year.
“We rely on partnerships with AT&T to create the ecosystem of tech and innovation to fill the gaps in areas where we aren’t the experts. They cover such a huge part of our IT portfolio now, we simply couldn’t do it ourselves,” he says.
According to RBR CIO Matt Cadieux, F1 will continue to move towards a more virtualised environment, with a reliable and resilient global network now essential for success. Peasland adds that though there are plans to reduce costs and the burden of managing a data centre, at this stage a move towards a shared services model is too challenging due to the level of supercomputing needed for the simulations the team runs.
“We have to do Computational Fluid Dynamics (CFD) simulations, wind tunnel runs, track simulations… we couldn’t move that sort of infrastructure, so we’ve got to host it in the UK and feed it the data,” he says. “I’m sure F1 will move in the direction of putting more data in the cloud and having faster access to that… right now I think we’re fairly comfortable with what we’ve got in terms of on-premise cloud.”
CIO Journalist Bonnie Gardiner attended the Red Bull Racing team garage at Albert Park in Melbourne in the build up to the 2016 Australian Grand Prix, as a guest of Red Bull Racing and AT&T.
Follow Bonnie on Twitter: @Bonneth