
The general explanation:
What is happening right now?
We are on the premises of the Minimax Fire Protection Research Centre in Bad Oldesloe, where the DRZ consortium is meeting for a week to integrate all the robot systems of the consortium partners. The aim is that every robot speaks the same language and that all information flows into one large common system, so that every user can see what the individual robot systems provide.
And what does that look like in particular?
Each partner brings its own robotic equipment in its field of expertise, from drones to ground robots. Integrating these different systems into one common system is the first main task. The second is to visualise and present the results for the users, i.e. the fire brigade or other security forces, so that the target group can work with them in the field.
The interviews with our partners:
What is happening right now?
We are currently integrating our autonomous flying robots into the network provided by TU Dortmund University. The robots communicate via WiFi, and TU Dortmund University ensures that enough access points are available, that the data rate is sufficient and that all the required traffic can be transmitted. Right now we are connecting our robots to this system.
What hurdles have you already overcome in the project?
The most difficult hurdle in the project is the complexity of the systems: a robot has dozens of sensors and subsystems, and getting them all to work together at the same time is the hardest part. We are on the right track and have already cleared several of these hurdles.
What facilitation do you provide for real-life use?
We are responsible for the autonomous assistance functions and ensure that the flying robots assist the emergency services autonomously. This means that they avoid obstacles or fly independently to certain points. This relieves the emergency forces and helps to avoid wrong decisions made under stress.
What exactly does autonomous mean for you?
Autonomous means that the robot independently performs certain tasks that a firefighter then no longer has to do: for example, taking off on its own, photographing the danger area and recognising important details there (e.g. hazard signs). Based on this data, the robot assesses the danger and reports it to the emergency worker or the control centre. Full autonomy is not the goal, for legal and safety reasons, but the more work the robot can take off the operator's hands autonomously, the better. In our project there will therefore still be an operator who monitors what the robot decides.
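To make this division of labour concrete, here is a minimal sketch of such an assistance loop in Python. All names (the waypoints, the `capture_photo` and `assess_danger` stubs, the operator callback) are hypothetical illustrations, not the project's actual control software; the point is only that the robot executes the steps autonomously while a human reviews every decision.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float
    z: float

def capture_photo(wp: Waypoint) -> str:
    # Stub: a real drone would trigger its camera here.
    return f"photo_at_{wp.x}_{wp.y}_{wp.z}.jpg"

def assess_danger(photo: str) -> str:
    # Stub: a real system would run detection (e.g. hazard signs) here.
    return f"no hazard signs found in {photo}"

def fly_mission(waypoints, operator_review):
    """Work through the waypoints autonomously, but pass every
    assessment to a human who monitors the robot's decisions."""
    for wp in waypoints:
        photo = capture_photo(wp)      # autonomous step
        report = assess_danger(photo)  # autonomous step
        operator_review(wp, report)    # the human stays in the loop

route = [Waypoint(0, 0, 10), Waypoint(5, 5, 10)]
fly_mission(route, lambda wp, r: print(f"[operator] {wp}: {r}"))
```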
What is happening right now?
We are preparing to test our robot in a replicated, realistic environment. The room is filled with smoke, and we want to see how this smoke affects the sensor data. From this we want to draw conclusions as to whether the data remains usable in smoke.
What hurdles have you already overcome in the project?
We have built a completely new demonstrator (D4). It is intended for indoor use, i.e. for industrial sites and industrial halls, where it is meant to patrol and detect fires as early as possible in order to prevent a larger fire from developing. The robot is configurable thanks to a modular concept; with its high speed (118 km/h) and its very high payload capacity (100 kg), it can be adapted to individual missions.
What facilitation do you provide for real-life use?
We are already trying to act preventively: the robot can detect early on where fires could break out (e.g. in areas of industrial plants that are difficult to access) and report them. With this help, the emergency services quickly gain an overview of the situation and can intervene at an early stage. The robot is also able to extinguish small fires directly and independently.
What is happening right now?
We are currently testing the radar module we developed, which enables the robot to localise itself and navigate in smoky environments. Because radar signals penetrate smoke, the robot can see through it and recognise the geometry of the room, which allows it to detect and avoid obstacles. The robot is in the test room and the sensors are set up. First, smoke is generated with a fog machine, then a smoke cartridge is ignited, and we test how well the sensors work in the smoke-filled room.
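As a rough illustration of how radar returns can become an obstacle map, the following sketch bins polar radar detections into a 2D occupancy grid. The inputs (range/angle pairs) and the grid resolution are assumptions for illustration, not the team's actual pipeline.

```python
import math

def radar_to_grid(detections, cell_m=0.25, half_extent_m=10.0):
    """Bin polar radar detections (range in metres, angle in radians,
    robot at the origin) into a sparse set of occupied grid cells."""
    occupied = set()
    for rng, ang in detections:
        x = rng * math.cos(ang)
        y = rng * math.sin(ang)
        if abs(x) <= half_extent_m and abs(y) <= half_extent_m:
            occupied.add((int(x // cell_m), int(y // cell_m)))
    return occupied

# Three returns; the first two fall into the same cell.
print(radar_to_grid([(2.0, 0.0), (2.1, 0.02), (4.0, math.pi / 2)]))
```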
What hurdles have you already overcome in the project?
We now have an egocentric representation of the robot's situation picture. This means that the robot understands its environment: it knows where, for example, walls or a fire are located and can make decisions based on this, such as exploring the environment, alerting the operator to dangers or creating a 3D model of the surroundings.
- Interposed question: How did the robot learn that? Did it go to school?
Yes, you could say that. The robot has learned to understand its environment in 3D. It has various sensors with which it perceives its surroundings, and we have worked on how to process the resulting sensor data. The robot then understands, for example, that there are walls and floors here, where it can drive, where there is an obstacle it cannot overcome, and where there is an obstacle it can overcome by using its joints and arms.
The assistance functions are constantly being developed further, enabling an ever higher level of autonomy. In this way, the robot's abilities keep progressing.
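A very reduced sketch of that kind of decision (with assumed height thresholds, not the actual algorithm): given a height map of the surroundings, label each cell as drivable, climbable with the robot's joints and arms, or blocked.

```python
def classify_cells(height_map, drive_max=0.05, climb_max=0.30):
    """Label terrain cells from a height map (metres above the floor).
    The thresholds are illustrative assumptions, not measured values."""
    labels = {}
    for cell, height in height_map.items():
        if height <= drive_max:
            labels[cell] = "drivable"
        elif height <= climb_max:
            labels[cell] = "climbable"  # overcome with joints and arms
        else:
            labels[cell] = "blocked"    # e.g. a wall
    return labels

print(classify_cells({(0, 0): 0.0, (0, 1): 0.15, (0, 2): 2.4}))
# {(0, 0): 'drivable', (0, 1): 'climbable', (0, 2): 'blocked'}
```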
- Interim question: What does autonomy mean to you?
Autonomy means that we place the robot in an environment and then press "Start exploration". The robot then explores the environment independently, without any further intervention. In our setup there is still someone who supervises the robot and is given a certain transparency into the assistance functions, but he no longer has to intervene directly.
What facilitation do you provide for real-life use?
We want to prevent rescue forces from having to expose themselves to increased risks in dangerous, unclear situations: for example, when a hazardous substance escapes and the emergency forces do not know what substance it is, in what quantity or where exactly it escaped, or when a building is in danger of collapsing and it is unknown whether an injured person is still in the rubble. Our goal is to send the robot ahead to create a situation picture of the surroundings, so that the emergency worker no longer has to be brought into the dangerous situation.
What is happening right now?
Here you can see the situation display that we are developing as part of the project. It shows a map on which the entire operational situation is visualised: the data of the robots, the positions of the emergency forces, and an overview of operational commands, orders and executed actions, i.e. an exact picture of what is happening on site. The data of the task forces are entered into the system manually by the group and platoon leaders, who draw virtual markers onto the situation picture to depict the situation accordingly. The system thus combines all components of the operation. The added value is that different parties (task forces on the scene, the operator in the robotics command vehicle, task forces further away, etc.) access the same situation display and see changes and updates within fractions of a second.
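The "everyone sees the same picture" idea boils down to a publish/subscribe pattern: all parties subscribe to one shared model, and every update is pushed to all of them immediately. The following is a minimal in-process sketch with invented names (`SituationMap`, `subscribe`); in the real system, of course, the updates travel over the network.

```python
class SituationMap:
    """One shared situation model; every change notifies all parties."""

    def __init__(self):
        self._markers = {}      # marker id -> description
        self._subscribers = []  # callbacks of all connected parties

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def update(self, marker_id, description):
        self._markers[marker_id] = description
        for notify in self._subscribers:
            notify(marker_id, description)

board = SituationMap()
board.subscribe(lambda m, d: print(f"[operator, RobLW] {m}: {d}"))
board.subscribe(lambda m, d: print(f"[platoon leader] {m}: {d}"))
board.update("robot-1", "smoke detected in hall B")
```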
What hurdles have you already overcome in the project?
We have already overcome numerous hurdles. The biggest challenge was to store the different data centrally and make it available. Another major hurdle we have overcome is the integration of the software into a real operation: we map the complete command structures of a fire brigade operation in the software and in the situation picture.
What facilitation do you provide for real-life use?
We facilitate the presentation of the entire situation. What was previously done with paper and slips of paper is processed graphically in this system using state-of-the-art technology, and several parties can access the same data. Updates via radio, mail or SMS are no longer necessary, which gains valuable time for the operation.
What is happening right now?
At Minimax, we specialise in detecting and fighting incipient fires, and we are currently trying to approach, detect and fight these fires with a robot. The place of operation is usually an industrial company. Unlike the fire brigade, which only arrives once a fire has broken out, we are already on site with the robot. The robot, which is small, fast and manoeuvrable and has good detection, can fight the incipient fire. It carries only a little extinguishing agent, but the amount is enough to bridge the period until the emergency services arrive. We are not concerned with the different driving platforms and how they are steered, but with the extinguishing application, i.e. which extinguishing agent is best suited to an incipient fire. The robot is supported by a fire alarm system or a person.
What hurdles have you already overcome in the project?
The biggest challenge was the initially unfamiliar cooperation with a university, as opposed to a commercial enterprise. Certain processes run very differently at a university than they do here, but we quickly got used to that.
What facilitation do you provide for real-life use?
The relief is that a mobile device is already at the scene, providing data or having already initiated the extinguishing process. The emergency services can then work with the data obtained and intervene in a targeted manner. A large fire can be avoided with the extinguishing robot.
What is happening right now?
We are currently flying a drone that takes pictures and creates panoramic images. The drone's livestream is sent to a server via the mobile network. There, a selection of the video frames is made and sent via an Intelligent Image Hub to a so-called WebODM, with an AI running in the background for detection. There, a 3D point cloud is generated to support the emergency services on site. The emergency services can thus view all objects in the dimensionally accurate point cloud and draw conclusions about their size and shape.
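For readers who want to reproduce the photogrammetry step: WebODM builds on NodeODM, for which a small Python client, pyodm, exists. A minimal sketch could look like the following; the host, file names and the option value are placeholders, and this is only one plausible way to drive such a pipeline, not the project's actual setup.

```python
# pip install pyodm -- assumes a NodeODM instance is reachable.
from pyodm import Node

node = Node("localhost", 3000)  # placeholder host and port
task = node.create_task(
    ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"],  # selected frames
    {"pc-quality": "medium"},   # illustrative processing option
)
task.wait_for_completion()
task.download_assets("./results")  # assets include the 3D point cloud
```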
What hurdles have you already overcome in the project?
First of all, together with four other colleagues, I learned from Prof. Surmann how to fly a drone: not just simple flying, but also all the autonomous assistance functions and sensor data processing, as well as the various AI algorithms one has to pay attention to while flying. Learning under pandemic conditions was a real challenge. Then, together with the association and Fraunhofer IAIS, we expanded the robotics command vehicle into a small mobile data centre that runs cloud and server services as well as the AI processes I just mentioned. This allows us to send live image data via the network to various systems, for example to the IAIS situational awareness system or to the 3D mapping module. We have made the system so robust that it has already been used in a fire drill.
What facilitation do you provide for real-life use?
When the emergency services are on the scene, the drone can already capture panoramic images and point clouds without a firefighter having to enter a dangerous situation. A drone can also localise people in a building and thus pass targeted information on to the emergency services.
What is happening right now?
Currently, we are putting our demonstrator "Xplorer" into operation. On this research system, four communication channels are bundled as part of the development of interoperable communication interfaces, in order to provide an optimal connection for the emergency services. The following networks are bundled:
- Local network - which serves as our research network
- Public mobile network
- Private mobile network in the 5G campus network band - which we bring along in our communications lab
- Proprietary system - representative of the systems currently used in rescue robotics.
These many systems are necessary because each system has its individual advantages and disadvantages. For example, a proprietary system only supports a very low data rate and is only available locally. When the robot moves out of the system's reception range, alternative systems are needed to ensure a continuous connection of all robotic systems. Therefore, we have developed methods that enable a combination of these networks. The goal is to provide continuous connectivity in real-world applications so that the robot is always available and data can be transmitted.
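A drastically simplified sketch of such a fallback decision (the link names and their priority order are assumptions, and real bundling operates on individual packets rather than whole messages): pick the most preferred network that is currently reachable, and fall back automatically when it drops out.

```python
# Ordered by preference: local research WiFi first, proprietary last.
LINK_PRIORITY = ["local_wifi", "campus_5g", "public_lte", "proprietary"]

def pick_link(available):
    """Return the most preferred link that is currently reachable."""
    for link in LINK_PRIORITY:
        if available.get(link, False):
            return link
    raise ConnectionError("no network available")

# The robot has left WiFi and 5G coverage; the call falls back to LTE.
status = {"local_wifi": False, "campus_5g": False,
          "public_lte": True, "proprietary": True}
print(pick_link(status))  # -> "public_lte"
```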
In parallel, the integration of the central demonstrators with our communication modules is currently underway. At the end of the integration sprint, all systems will be able to communicate with the control centre in the RobLW (robotics command vehicle) via our communication interfaces.
What hurdles have you already overcome in the project?
For the first time, we have created the possibility of bundling different networks transparently via interoperable communication interfaces; this did not exist before. The bundling is being applied in practice here in Bad Oldesloe for the first time. In addition, we have connected the central demonstrators of the project via the interoperable communication modules for the first time.
What facilitation do you provide for real-life use?
In real operations, the network connection often fails as soon as the robot moves inside a building, or a system suddenly becomes unavailable. Reconnaissance then comes to a halt and the emergency services can no longer act. Continuous availability through our interoperable communication interfaces is therefore of enormous importance for the emergency services.
What is happening right now?
The University of Lübeck takes care of the human factor in the overall project. Besides fighting fires, a firefighter's goal is to rescue endangered people, so robots in firefighting operations should at least be able to recognise people. There are already many ways to detect people, but environmental influences such as smoke, fire or partial occlusion often prevent localisation. Our module therefore combines three approaches: image processing detects a face (even one that is only 50% visible), a body outline and a heat source. If several of these features occur in combination, it can be assumed that a human being is present. In addition, we take care of assessing the health of injured people: we teach the robots to recognise human vital signs, primarily heartbeat, breathing and body temperature, and to determine the severity of injuries.
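The combination rule can be sketched in a few lines. The weights and the threshold below are invented for illustration; the real module fuses its detectors differently. The point is that each cue only contributes a confidence, and only their combination triggers a "person" hypothesis.

```python
def is_person(face_conf, outline_conf, heat_conf, threshold=1.5):
    """Fuse three independent cues; no single cue is sufficient on its
    own. Weights and threshold are illustrative assumptions."""
    score = 1.0 * face_conf + 0.8 * outline_conf + 0.7 * heat_conf
    return score >= threshold

# Half a face plus a body outline plus a heat source -> person.
print(is_person(face_conf=0.5, outline_conf=0.9, heat_conf=0.9))  # True
# A body outline alone is not enough.
print(is_person(face_conf=0.0, outline_conf=0.9, heat_conf=0.0))  # False
```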
What hurdles have you already overcome in the project?
In the meantime, we have built two modules. One represents the state of the art and research in rescue robotics: many of the person-detection methods published in the literature have been integrated. The module is equipped with a thermal imaging camera and a standard camera, a microphone array that locates people by their calls, and eight gas sensors that have been used in the literature to detect buried people by their smell. A radar module measures and localises both large movements and the smallest ones, such as pulsation, i.e. the movement of the skin surface caused by blood flow during a heartbeat. Of course, a moving emergency worker detected by the radar generates a signal a thousand times stronger. But we know the frequencies at which a heart beats and at which a normal person breathes, and we filter for these frequencies in order not only to detect injured people but also to say something about their state of health.
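That frequency filtering can be illustrated with standard signal processing tools. Below is a minimal SciPy sketch; the band limits are common textbook values (roughly 0.1-0.5 Hz for breathing, 0.8-3 Hz for heartbeat), not the module's actual parameters, and the radar displacement signal is synthetic.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, fs, low_hz, high_hz, order=4):
    """Keep only the band in which the vital sign of interest lives."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 100.0                     # assumed radar sampling rate in Hz
t = np.arange(0, 20, 1 / fs)
# Synthetic skin-surface displacement: breathing (0.25 Hz) and heartbeat
# (1.2 Hz) buried under a far stronger slow large-scale motion.
x = (100 * np.sin(2 * np.pi * 0.02 * t)
     + 1.0 * np.sin(2 * np.pi * 0.25 * t)
     + 0.1 * np.sin(2 * np.pi * 1.2 * t))

breathing = bandpass(x, fs, 0.1, 0.5)  # breathing band
heartbeat = bandpass(x, fs, 0.8, 3.0)  # heartbeat band
print(breathing.std(), heartbeat.std())  # the weak components survive
```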
What facilitation do you provide for real-life use?
The focus of the University of Lübeck is clearly on medical technology. Other groups navigate their robots autonomously through a rescue scenario while carrying our vital signs module. The module automatically detects people in its vicinity and records various vital parameters. This provides information on whether people are still alive and who needs help first. In this way, emergency forces can plan their deployment in a targeted manner and help where help is needed most quickly.
What is happening right now?
We are working on a cross-platform modularisation concept. The goal is to break the robot functions down into smaller units, so-called modules, which can be used independently of each other on different robot systems. The robots themselves are equipped with a so-called module carrier. This carrier provides a standardised receptacle for the DRZ modules that have been developed, which can themselves have very different capabilities. The operator can choose which devices and sensors are to be carried and thus decide, for example, whether a thermal camera, an optical camera, a 3D laser scanner or something else is needed for the current mission. This makes tailor-made missions possible.
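In software terms, the standardised receptacle corresponds to a common interface that every module implements. The following sketch uses invented names (`DRZModule`, `ThermalCameraModule`) purely to illustrate the concept; it is not the project's actual module API.

```python
from abc import ABC, abstractmethod

class DRZModule(ABC):
    """Common interface every payload module implements, so that any
    carrier robot can host any module (names are illustrative)."""

    @abstractmethod
    def capabilities(self) -> list:
        ...

    @abstractmethod
    def read(self) -> dict:
        ...

class ThermalCameraModule(DRZModule):
    def capabilities(self):
        return ["thermal_image"]

    def read(self):
        return {"max_temp_c": 21.5}  # stub measurement

def mount(robot_name, module):
    print(f"{robot_name} carries {type(module).__name__}: "
          f"{module.capabilities()} -> {module.read()}")

# The same module works on any ground platform.
mount("large ground robot", ThermalCameraModule())
mount("small robot with manipulator", ThermalCameraModule())
```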
What hurdles have you already overcome in the project?
There was no comparable preliminary work that we could adapt, so we first had to think through what the whole thing should look like: from the spatial layout to the shapes of the modules, the interfaces and the connectors. The system should be as flexible as possible without imposing too many specifications.
What facilitation do you provide for real-life use?
The benefit for the operator is that he can work with familiar systems. Depending on the operational situation, he selects a robot (large, small, with manipulation, etc.) and mounts the module he already knows onto it, completely independently of the ground platform. With this standardisation idea we can ensure that there are not so many isolated solutions: the modules are meant to map a set of basic capabilities that the operator is familiar with.
What is happening right now?
Our system listens to the radio traffic and interprets it. The information resulting from this interpretation is placed in the context of the mission and forwarded to our process assistance component, where it is used to visualise the status of the mission and to suggest possible next steps.
What hurdles have you already overcome in the project?
Integration! In a project with so many different partners, that is always one of the biggest challenges. In addition, we are developing language models for a "real-life" domain for which there is little or no data. From the perspective of the process assistance component, this is also the first attempt known to us to integrate methods and techniques from business process management into fire brigade operations. We believe we have succeeded quite well in this.
What facilitation do you provide for real-life use?
We are developing two components to facilitate the deployment. With the first component, for voice recognition and processing, data no longer has to be entered manually; instead, parts of the data are automatically extracted from the radio communication and entered into the DRZ system. The second component processes this data from a process-oriented perspective and visualises relevant information on the current status of the mission for the task forces. This reduces the cognitive load on the task forces, especially the leaders: the current status of a mission as well as possible future steps are visible at a glance. In addition, the central collection and processing of data enables better mission follow-up; for example, a functionality for the automatic creation of mission reports is currently being developed.
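To illustrate the first component's extraction step: once the radio traffic has been transcribed, structured fields can be pulled out of the text. The toy sketch below uses a single regular expression; the message format and field names are invented, and the project's language models are of course far more robust than pattern matching.

```python
import re

# Toy pattern for a transcribed status message; real radio phrasing
# and the extraction models used in the project are more involved.
PATTERN = re.compile(
    r"(?P<unit>[\w-]+) to control: (?P<status>arrived|searching|done) "
    r"at (?P<location>.+)"
)

def extract(transcript):
    """Return the structured fields of a message, or None."""
    match = PATTERN.match(transcript)
    return match.groupdict() if match else None

msg = "squad-2 to control: arrived at hall B, north entrance"
print(extract(msg))
# {'unit': 'squad-2', 'status': 'arrived', 'location': 'hall B, ...'}
```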
What is happening right now?
We are testing the developed systems in a practical application scenario, in which the cooperation of the different robotic systems is also put to the test.
What hurdles have you already overcome in the project?
Due to the individual environmental conditions at operation sites, the application differs considerably from standardised industrial applications. In order to develop practical solutions for these challenging tasks, we support the project partners in implementing user-related requirements and specifications, among other things by developing comparable test scenarios.
What facilitation do you provide for real-life use?
The integration of the various technical systems is a major challenge and must be tested in practical exercises. Among other things, the Dortmund Fire Brigade tests the interaction of the systems developed by the project partners in realistic scenarios. For this purpose, mission-related processes are practised under practical conditions.