Sebastian Risi and his research group are combining insights from robotics, artificial intelligence and evolutionary biology to create a new breed of thinking machines.
- Machines and robots play an ever-increasing role in science and our society. However, so far, they are laboriously planned and constructed for very specific tasks, usually at considerable expense. Current machines are specialised to perform only a very limited and fixed set of functions, which has greatly limited their autonomy. This brittleness contrasts starkly with the capabilities of animals, which can adapt to a specific habitat through natural selection. The aim of our research at the Robotics, Evolution and Art Lab (REAL) is to combine insights from robotics, artificial intelligence, and evolutionary biology to create more resilient machines that are not programmed to perform a certain task but instead evolve and adapt to perform the task at hand. The robot bodies and brains in our experiments are not designed by humans but by a process called artificial evolution, which is inspired by the principles of natural evolution, such as mutation and selection. The hope is that this approach may facilitate the automatic design of more intelligent machines that can help us solve challenging problems, and also allow us to answer open questions in biology that are difficult to answer in vivo, the young researcher from ITU Copenhagen explains.
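The core idea of artificial evolution described above can be illustrated with a minimal sketch. This is not the group's actual algorithm, just a generic (mu + lambda)-style loop over mutation and selection, with a made-up fitness function for illustration:

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=100, sigma=0.1):
    """Minimal evolutionary loop: mutate candidates, evaluate them, select the best."""
    # Start from a random population of real-valued genomes.
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Mutation: each child is a Gaussian perturbation of a random parent.
        children = [[g + random.gauss(0, sigma) for g in random.choice(pop)]
                    for _ in range(pop_size)]
        # Selection: keep the fittest half of parents and children combined.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Hypothetical fitness: reward genomes close to the all-ones vector.
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```

In practice the genome would encode a robot's body plan or controller, and the fitness function would score how well the simulated robot performs the task.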
In addition to our scientific experiments, more and more companies are also starting to express interest in these advanced AI techniques, Sebastian Risi says.
Sebastian Risi and his research group are not beginners when it comes to supercomputing.
- My group and I have previously used Amazon’s EC2 cluster and MIT’s StarCluster system to run evolutionary and GPU-based deep learning experiments.
When Sebastian Risi and his team heard about the supercomputer at SDU and the possibility of running a free pilot project, they did not hesitate to apply.
- I heard about the DeIC National eScience Pilot Project from a colleague at the IT University of Copenhagen. We thought that the project and ABACUS 2.0 had exactly the setup we needed.
The technical support and 1,000 free compute node hours were ideal for getting familiar with ABACUS 2.0 and its benefits. We are now running a long-term project on ABACUS 2.0 and will continue using it for our new research projects.
Sebastian Risi and his team found it easy to get familiar with ABACUS 2.0.
- The supercomputer at SDU was easy to use and importantly easy to administrate. In addition to our research group, we also often have students working on projects for a limited time for which the cluster is becoming essential. In these cases, it’s very convenient that you can easily add or remove users, he says.
Even though Sebastian Risi and his team are highly skilled technically, they still needed some support along the way:
- One benefit was certainly the initial technical support. Additionally, the easy administration of the cluster is something I have missed on other HPC systems we have used previously. The ABACUS 2.0 team was very helpful and always available when we needed them.
The use of supercomputing has influenced the research results in several ways, Sebastian Risi explains:
- Because we use population-based algorithms inspired by natural evolution, it is very important for us to be able to evaluate multiple potential solutions in parallel, which the ABACUS 2.0 cluster allows us to do. Instead of having to wait multiple days for experiments to finish, the turnaround time is much quicker when running them on an HPC. Additionally, ABACUS 2.0 has allowed us to scale up our experiments in multiple significant ways. We now use more realistic robot simulations and have increased the complexity of the brain-inspired artificial controllers through deep learning methods, which are inspired by how our brain represents knowledge in a hierarchical manner. Deep learning methods have led to drastic advances on benchmark tasks such as automatic recognition of images and speech, but they are very computationally intensive, which requires GPU-equipped HPC systems such as ABACUS 2.0.
Sebastian Risi sees both opportunities and challenges in using HPC in the near future:
- While it was earlier possible to run at least some AI experiments on your own desktop computer, AI algorithms now rely more and more on the power of HPCs. There is certainly a trend towards more complex domains and more complex AI models with millions of parameters, so parallel computation and GPUs are quickly becoming a necessity in this research field. For example, we recently started combining deep learning with evolutionary approaches, which increases the computational demands even more and would simply not be possible without an HPC. To sustain the current rapid progress in AI and create the next generation of intelligent machines, an important challenge for the future will be to continue co-developing new hardware together with novel algorithms.
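Combining deep learning with evolution often means evolving the weights of a neural network instead of training it by gradient descent, an approach known as neuroevolution. A toy sketch of this idea, again not the group's actual method: a tiny fixed-topology network whose nine weights are evolved to approximate the XOR function:

```python
import math
import random

def forward(weights, x):
    # Tiny fixed-topology network: 2 inputs -> 2 tanh hidden units -> 1 output.
    w1, b1, w2, b2 = weights[:4], weights[4:6], weights[6:8], weights[8]
    h = [math.tanh(w1[2 * i] * x[0] + w1[2 * i + 1] * x[1] + b1[i])
         for i in range(2)]
    return h[0] * w2[0] + h[1] * w2[1] + b2

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # Negative squared error over the XOR truth table (higher is better).
    return -sum((forward(weights, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=300, sigma=0.3):
    # Evolve the network's 9 weights directly, with no gradient descent.
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        children = [[w + random.gauss(0, sigma) for w in random.choice(pop)]
                    for _ in range(pop_size)]
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

best = evolve()
```

Real neuroevolution experiments evolve far larger networks over far more evaluations, which is exactly where GPU-equipped HPC systems become necessary.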
Sebastian Risi and his group have had three papers accepted that used ABACUS 2.0 to run experiments, and several more have been submitted and are under review.
See also the video on YouTube: Evolving Modular Robots Using Direct and Generative Encodings.
Sebastian Risi is an Associate Professor at the IT University of Copenhagen. He received a diploma in computer science from the Philipps University of Marburg, Germany in 2007 and received a PhD in 2012 from the University of Central Florida. From 2012 to 2013 he was a postdoctoral fellow at Cornell University. More information about Sebastian Risi and his research can be found here: real.itu.dk and sebastianrisi.com.