A Pattern of Intelligence: The MFC 500 Multifunction Camera

The modular camera platform newly developed by Chassis & Safety shows how much artificial intelligence is already in driver assistance systems today — and what will be possible in the next decade.

Without technologies based on artificial intelligence and deep machine learning, nothing will work in the future – there are no longer two opinions on this in the automotive industry. The Chassis & Safety BU ADAS has taken this into account by recently opening the Competence Center for Deep Machine Learning in Budapest. The first ADAS products designed for deep machine learning are now ready for series production – for example, the new, modularly designed MFC 500 multifunction camera.

“First of all, the fifth generation of our camera platform is of course the considerably expanded and improved version of its predecessor,” said Dr. Sascha Semmler, Head of Program Management Camera in the BU. The MFC 500 is modular and scalable and can be tailored to any customer requirement, from the mono camera as a speed and lane keeping aid to the surround view and automated driving system with multiple cameras. “It is by definition designed for applications that will place enormous demands on data processing in five to ten years’ time,” added Semmler. “The platform is designed so that we can integrate far more powerful central computers than previously available, and it can process far more advanced algorithms for pose and gesture control or automated driving, for example.”


Assistance system with a seventh sense

To ensure this, the camera must meet much higher requirements for environment recognition than its predecessors. The system will not only have to identify static obstacles or moving objects on the road and then respond with warnings to the driver, brake commands or lane corrections. “Instead, the system must develop an understanding of the scene, a seventh sense for surprising, dangerous situations,” explained Semmler. The classic example: if a ball rolls onto the street, a child is almost guaranteed to follow it. In order to react to such situations, the assistance systems of the future must be able to establish context and draw conclusions. Such complex scenarios, however, can no longer be programmed and mapped with manageable software code.

“Instead, we work generically, in other words we create images and videos in which the road, zebra crossings, vehicles and obstacles are labelled. In addition, we label areas into which the vehicle can divert in an emergency: free lanes, hard shoulders, green zones,” he said. “We teach the system to react adequately using neural networks. They are conditioned in such a way that they establish relationships between different pieces of information, and the system learns to carry out the necessary follow-up actions – safely steering or braking, for example.”
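The labelling-and-learning loop Semmler describes can be sketched, in a highly simplified form, as supervised training on labelled data. The example below is purely illustrative and not Continental's implementation: synthetic two-dimensional "features" stand in for labelled camera pixels, and a single logistic neuron stands in for a deep network.

```python
import numpy as np

# Synthetic per-pixel features with labels: 0 = drivable road, 1 = obstacle.
# In a real pipeline these would come from hand-labelled camera images.
rng = np.random.default_rng(0)
road = rng.normal(loc=[0.2, 0.3], scale=0.05, size=(200, 2))
obstacle = rng.normal(loc=[0.7, 0.8], scale=0.05, size=(200, 2))
X = np.vstack([road, obstacle])
y = np.array([0] * 200 + [1] * 200)

# Minimal "network": one logistic neuron trained by gradient descent
# on the cross-entropy loss.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted obstacle probability
    grad_w = X.T @ (p - y) / len(y)         # gradient of the loss w.r.t. w
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# After training, the model separates the two labelled regions.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The principle – labelled examples condition the weights until the model reproduces the desired response – is the same one that, scaled up to deep networks and millions of annotated frames, underlies the scene understanding described above.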


Computer performance multiplied

The technology can now be implemented as a result of the explosion in chip performance, according to Semmler. The chip industry has been supporting and driving forward the development of high-performance computers for several years – that was one of the door-openers. The rise of the central computer will also put an end to the proliferation of control units for a wide variety of applications in vehicles. “The number of control units has grown continuously over the last decades, and a consolidation will now follow,” Semmler observed.

Centralized computers would also use computing capacity for assistance systems much more effectively than conventional ECUs, he said: “Parking maneuvers do not need to be calculated on the highway, and in city traffic vehicles do not have to look hundreds of meters ahead. All resources can therefore be allocated according to the respective driving situation. This is what humans already do by nature.”
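The situation-dependent allocation Semmler describes can be sketched as a simple dispatcher on the central computer that enables only the assistance functions relevant to the current driving context. All function names and thresholds below are hypothetical, chosen only to illustrate the idea:

```python
# Illustrative sketch: a central computer activates assistance functions
# per driving situation, so compute is not spent on irrelevant tasks.
# Function names and speed thresholds are invented for this example.

def active_functions(speed_kmh: float, gear: str) -> set[str]:
    """Return the set of assistance functions worth computing right now."""
    if gear == "reverse" or speed_kmh < 10:
        # Low-speed maneuvering: near-field perception matters most.
        return {"parking_assist", "surround_view"}
    if speed_kmh < 60:
        # City traffic: short range, many vulnerable road users.
        return {"pedestrian_detection", "lane_keeping"}
    # Highway: look far ahead, no parking computation needed.
    return {"lane_keeping", "long_range_sensing"}

print(active_functions(5, "reverse"))
print(active_functions(120, "drive"))
```

A conventional architecture with one ECU per function cannot reassign capacity this way; a centralized computer can, which is the efficiency gain the passage points to.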


Development according to the fail fast principle

Semmler said this scenario will become reality soon. The same applies to the development of a working method with which the camera platform can be effectively expanded and refined in the coming years. “Developing in an ivory tower, generating millions of requirements and specifications, and at some point presenting a finished, absolutely mature product – this is not the way to develop software-supported functions,” explained Semmler. “Instead, we must follow the ‘fail fast’ principle: tackle difficult cases first, start over early when necessary, and try out as much as possible. This is the incremental approach nature teaches us.” In this context, Semmler pointed out how important intensive cooperation across BUs and divisions is for the further development of the MFC 500: “In addition to the new AI Center in Budapest, we are cooperating closely with colleagues from C&S Advanced Engineering, Automotive S&T and their AI Center, to name but a few. Because for good neural networks and well-connected vehicles, we need well-connected teams and people – this is our ultimate key to success.”