(1225-B) Automated Logging of Process Metrics on a Robotic Liquid Handler: Daily Usage, Run Times, Errors and More
Monday, February 5, 2024
2:00 PM – 3:00 PM EST
Location: Exhibit Halls AB
Abstract: Optimizing laboratory processes is essential for maximizing throughput and minimizing overhead. Determining how to optimize a process requires access to meaningful and accurate metrics about that process. Most automated liquid handlers capture these metrics by logging large text files. Tabulating, storing, and retrieving this data can be cumbersome, especially for those in decision-making roles who may not be familiar with the automation or the generated files. Herein, we describe a custom application used to sweep Hamilton trace files for important metrics. The application is added to a method by importing a simple Hamilton Venus library and adding a single step to the beginning and end of the method. The script then automatically runs the application, which collects information such as the start and stop times, a unique run ID, the serial number of the Hamilton robot, the operator, the method name, the number of samples, and the number of batches. It also logs any errors encountered during the run and whether each error triggered custom error handling (decisions pre-made by the programmer) or was responded to by the operator, and how. All data is parsed from the trace file produced during every run, which is copied to a network location. Data is recorded automatically and stored in a sqlite3 database, allowing users to retrieve and display it in any manner; in this case, a Python Dash app was used to retrieve and display the data. This library was used to monitor the metrics of 13 different Hamilton robots running more than 15 different assays across 3 different laboratories. The collected data was used to compare processes run across different robots and across different sites. For example, run times per number of samples were found to be significantly higher at some laboratories, indicating suboptimal programming.
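The parse-and-store pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual library: the trace-file format shown (`SAMPLE_TRACE`), the regular expressions, and the `runs` table schema are all hypothetical stand-ins, since real Hamilton trace files are far richer.

```python
import re
import sqlite3

# Hypothetical trace-file excerpt; real Hamilton trace files differ in format.
SAMPLE_TRACE = """\
2024-02-05 08:01:12> SYSTEM : Method MyAssay.med started by operator jdoe
2024-02-05 08:42:55> SYSTEM : error 1000: labware not found
2024-02-05 09:15:30> SYSTEM : Method MyAssay.med completed
"""

def parse_trace(text):
    """Pull start/stop times, method name, operator, and errors from trace text."""
    start = re.search(r"^(\S+ \S+)> .*Method (\S+) started by operator (\S+)",
                      text, re.M)
    stop = re.search(r"^(\S+ \S+)> .*Method \S+ completed", text, re.M)
    errors = re.findall(r"error \d+: .*", text)
    return {
        "start_time": start.group(1),
        "stop_time": stop.group(1),
        "method": start.group(2),
        "operator": start.group(3),
        "errors": errors,
    }

def store(metrics, db_path=":memory:"):
    """Append one run's metrics to a sqlite3 database (schema is illustrative)."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS runs
                    (method TEXT, operator TEXT, start_time TEXT,
                     stop_time TEXT, n_errors INTEGER)""")
    conn.execute("INSERT INTO runs VALUES (?, ?, ?, ?, ?)",
                 (metrics["method"], metrics["operator"],
                  metrics["start_time"], metrics["stop_time"],
                  len(metrics["errors"])))
    conn.commit()
    return conn
```

In practice the database would live at the network location mentioned above rather than in memory, and a sweep step at the end of each method would call the parser on the freshly copied trace file.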
The efficiency of individual operators was also evaluated by comparing run times for each user. Instrument utilization was likewise evaluated, providing an accurate and unbiased assessment of instrument capacity and enabling workload balancing. Error logs enabled easy remote troubleshooting: each error was tied to the unique run ID, which was then associated with specific operators or methods. The percentage of successful versus failed runs was also easily monitored, allowing flaws in programming or processes to be discovered.
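The cross-site and success-rate comparisons above reduce to simple aggregate queries once the metrics live in sqlite3. The sketch below assumes a hypothetical `runs` table and made-up data; it only illustrates the kind of query used, normalizing run time by sample count per site and computing a success percentage.

```python
import sqlite3

# Illustrative schema and data, not the production database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE runs
                (run_id TEXT, site TEXT, method TEXT, operator TEXT,
                 n_samples INTEGER, runtime_min REAL, succeeded INTEGER)""")
conn.executemany("INSERT INTO runs VALUES (?, ?, ?, ?, ?, ?, ?)", [
    ("r1", "LabA", "AssayX", "alice", 96, 48.0, 1),
    ("r2", "LabA", "AssayX", "bob",   96, 52.0, 1),
    ("r3", "LabB", "AssayX", "carol", 96, 75.0, 0),
    ("r4", "LabB", "AssayX", "carol", 48, 40.0, 1),
])

# Run time per sample and success rate, grouped by site -- the comparison
# that surfaced slower implementations at some laboratories.
rows = conn.execute("""SELECT site,
                              ROUND(AVG(runtime_min * 1.0 / n_samples), 3),
                              ROUND(AVG(succeeded) * 100.0, 1)
                       FROM runs
                       WHERE method = 'AssayX'
                       GROUP BY site
                       ORDER BY site""").fetchall()
for site, min_per_sample, pct_success in rows:
    print(site, min_per_sample, pct_success)
```

Grouping by `operator` instead of `site` gives the per-user run-time comparison, and a `WHERE succeeded = 0` filter joined on `run_id` pulls the error records for remote troubleshooting.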