Design and engineering

Automation of repetitive tasks

Most experiments in a biological laboratory can be decomposed into smaller steps that can be performed by a robot. Each step may require a different robot design and its own intelligence (software). Hardware and software design form an ongoing collaborative process in our R&D, together with clients. New solutions are chosen based on chance of success, market potential, and our clients' interest.

We have extensive experience designing solutions that combine microfluidics, biological specimens, physics, and mechanical, electrical and software engineering. Throughput is a driving force: we believe a robot needs to outperform an experienced human in speed while offering similar efficiency and smaller variance. Only then will it be used by the experts working in your laboratory. Our robots increase the yield of your experiments, e.g. in screens or daily tasks, and reduce manual bias. The user interface and user-friendliness reduce training time compared to learning to perform the same procedure manually.

Manual experiments

Biological diversity, small specimen size and fragility can make specimens difficult to handle and limit the efficiency that can be achieved. During manipulation (e.g. microinjection), optical feedback, often through a microscope, is used to determine whether the procedure succeeded or failed.

Note: this feedback during experiments is rarely well described in papers, which can make results difficult to reproduce. It is also a major cause of manual bias: different people find slightly different ways of doing things, and even the same person's technique drifts over time. Here, automation can help.

A small developmental time window can make it harder to collect enough data, and performing many identical experiments is tedious. Experimental requirements, repetitive use of equipment, and working in a dark room for microscopy can affect your health, causing e.g. RSI or depression. Here, automation is required.

Throughput and efficiency

Even at low efficiency, high throughput can yield enough data, provided that enough samples are available. At higher efficiency, higher throughput helps collect more data quickly and makes the experiment usable in a screen.
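This trade-off can be made concrete with a back-of-the-envelope calculation; the throughput and efficiency figures below are purely illustrative, not measurements of any particular robot:

```python
def successes_per_hour(throughput: float, efficiency: float) -> float:
    """Expected successful experiments per hour.

    throughput: attempts per hour
    efficiency: fraction of attempts that succeed (0..1)
    """
    return throughput * efficiency

# A hypothetical robot attempting 600 injections/hour at 30% efficiency:
robot = successes_per_hour(600, 0.30)   # 180 successes/hour

# A hypothetical expert doing 100 injections/hour at 80% efficiency:
human = successes_per_hour(100, 0.80)   # 80 successes/hour
```

Despite the lower per-attempt efficiency, the robot in this sketch delivers more successful data points per hour, assuming samples are not the limiting factor.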

If the efficiency is low, automation will not help much, but tools can be developed to improve the reproducibility of the manual work. Once these tools enable a higher efficiency, automating the procedure can be the next step.

Redesigning a task for automation

A first step in our R&D process is to investigate whether a manual task can be redesigned so that a robot can perform it. To this end, small prototypes of tools and equipment are built and tested for efficiency and throughput. This step requires biological specimens and the time of researchers, i.e. experts, to evaluate and improve our approach.

Once a task has been successfully redesigned, a robot prototype can be built, and manual operation of the robot can be replaced by machine learning and software control. In the past, biological variation hampered automated detection and classification, but deep learning, together with improvements in hardware (e.g. graphics cards), has revolutionized the possibilities. We use deep learning for many steps in our automation and robot control.

Deep learning

Deep learning outperforms humans in image-based classification and localization, and many repetitive laboratory tasks involve recognizing objects and biological specimens. Once trained, a deep learning algorithm runs in about 0.1 seconds per image, roughly 10-100 times faster than a human.
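Part of why inference is so fast: once trained, classifying an image is just a fixed sequence of matrix operations. The toy "network" below is a minimal sketch with random weights and invented class names, not one of our trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained network: one feature layer plus a linear classifier.
W_feat = rng.standard_normal((64, 64 * 64))  # flattened 64x64 image -> 64 features
W_cls = rng.standard_normal((3, 64))         # 3 hypothetical classes,
                                             # e.g. egg / larva / empty well

def classify(image: np.ndarray) -> int:
    """Forward pass: feature projection, ReLU, linear classifier, argmax."""
    x = image.reshape(-1)
    h = np.maximum(W_feat @ x, 0.0)  # hidden features
    logits = W_cls @ h               # per-class scores
    return int(np.argmax(logits))    # index of the predicted class

image = rng.standard_normal((64, 64))  # stand-in for a camera frame
label = classify(image)                # 0, 1 or 2
```

A real network has many more layers, but the per-image cost remains a bounded number of matrix multiplications, which is what keeps inference in the tenth-of-a-second range on a graphics card.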

Deep learning requires recording many images to capture different angles, lighting conditions and biological variation. These images are (manually) annotated and used to train an algorithm to reproduce the annotation. The larger the variation, the more images are required. For example, we needed to annotate more than 11,000 images of zebrafish eggs to reproducibly find the first cell: its shape depends on the developmental stage, which also affects its size and position; the orientation of the egg determines whether the first cell is visible or hidden behind the yolk; and there are further possibilities, such as an empty well, a two- or four-cell stage, or sick eggs. In contrast, to recognize one-day-old zebrafish larvae or adult Hyalella azteca, we needed only 500 images.
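The annotate-then-train loop described above can be sketched in miniature. In this illustrative example, a logistic regression stands in for a deep network, and random feature vectors with synthetic labels stand in for annotated images; the training objective is the same, to reproduce the annotation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for an annotated dataset: in practice the inputs would be
# image features and the labels would come from manual annotation.
n, d = 200, 5
X = rng.standard_normal((n, d))
true_w = rng.standard_normal(d)
y = (X @ true_w > 0).astype(float)  # binary annotation, e.g. "first cell visible"

# Logistic regression trained by gradient descent to reproduce the annotation.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / n       # gradient step on the logistic loss

# Fraction of annotations the trained model reproduces on the training set.
accuracy = float(np.mean((X @ w > 0) == (y == 1.0)))
```

With real images, the same loop needs far more data and a deep network, which is why the number of annotated images (500 vs. 11,000) scales with the variation the model has to absorb.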

Take the next step

Curious whether your experiment can be automated, and at what cost? Please have a look at this checklist to see if your experiment fits our capabilities:

If you can answer yes to these questions, let's discuss your application in more detail. Please contact us here for more information.