STKO for OpenSees
July 2020


Did you know that there has never been a truly comprehensive and in-depth benchmarking of OpenSees? Several researchers and practitioners consider this a serious problem that needs to be addressed. Benchmarking is a fundamental part of the software development cycle: it ensures the quality, reliability, and performance of software. Since OpenSees is the backbone of modeling for seismic engineering, this seemed to us a fairly urgent issue. We thought about the problem and decided to tackle it ourselves. After months of planning, setting benchmarks, and deciding how to approach the work, we were ready to start.

First, let’s discuss what verification and validation benchmarking is. Verification is a two-part process that determines the accuracy of a computational model and its solution. The first part is code verification, in which the software’s mathematical model and algorithms are tested to ensure that they work correctly. The second is solution verification, in which the discrete solutions of the mathematical model are analyzed for accuracy.
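Solution verification is often done by comparing a discrete solution against a closed-form one and checking that the error shrinks as the mesh is refined. As a minimal illustrative sketch (not from the article, and not using OpenSees itself), here is a finite-difference solution of a simply supported beam under uniform load, compared against the textbook midspan deflection 5qL⁴/(384EI); the beam dimensions and properties are made up for the example:

```python
# Illustrative solution-verification sketch: solve the beam curvature
# equation w''(x) = M(x)/(EI), with M(x) = q*x*(L-x)/2 and w(0)=w(L)=0,
# by central finite differences, then compare the midspan deflection
# with the closed-form value 5*q*L^4/(384*EI).

def beam_midspan_deflection(n, L=4.0, q=10.0, EI=2.0e4):
    """Midspan deflection magnitude from an n-interval grid (n even)."""
    h = L / n
    # Tridiagonal system: (w[i-1] - 2*w[i] + w[i+1]) = h^2 * M(x_i)/EI,
    # unknowns at interior nodes i = 1 .. n-1.
    a = [1.0] * (n - 1)   # sub-diagonal
    b = [-2.0] * (n - 1)  # main diagonal
    c = [1.0] * (n - 1)   # super-diagonal
    d = [h * h * (q * (i * h) * (L - i * h) / 2.0) / EI for i in range(1, n)]
    # Thomas algorithm (forward elimination, back substitution)
    for i in range(1, n - 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    w = [0.0] * (n - 1)
    w[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        w[i] = (d[i] - c[i] * w[i + 1]) / b[i]
    return abs(w[n // 2 - 1])  # node at midspan

exact = 5 * 10.0 * 4.0**4 / (384 * 2.0e4)
for n in (8, 16, 32):
    rel_err = abs(beam_midspan_deflection(n) - exact) / exact
    print(f"n = {n:3d}  relative error = {rel_err:.2e}")
```

The relative error should decrease roughly by a factor of four each time the grid is halved, consistent with the second-order accuracy of the central-difference scheme; a discrete solution that did not converge this way would signal a problem.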

Validation builds on the verification process and determines how accurately a model represents the real world for the model’s intended use. It is achieved by comparing experimental and numerical results, checking that the software accurately predicts the experimental outcome.
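In practice, the comparison between experimental and numerical results is usually quantified with an error metric checked against a tolerance agreed on beforehand. A minimal sketch, where the data points, the metric, and the 5 % tolerance are all made-up assumptions for illustration:

```python
# Illustrative validation-metric sketch: compare a simulated response
# curve against experimental measurements using a normalized
# root-mean-square error (NRMSE). All numbers below are hypothetical.

import math

def nrmse(experimental, simulated):
    """Normalized RMS error between two equally sampled response curves."""
    if len(experimental) != len(simulated):
        raise ValueError("curves must be sampled at the same points")
    mse = sum((e - s) ** 2 for e, s in zip(experimental, simulated)) / len(experimental)
    span = max(experimental) - min(experimental)
    return math.sqrt(mse) / span

# hypothetical force readings (kN) from a lab test and from the model
measured  = [0.0, 12.1, 22.8, 30.5, 34.9, 36.2]
predicted = [0.0, 11.8, 23.4, 29.9, 35.6, 36.0]

error = nrmse(measured, predicted)
print(f"NRMSE = {error:.3f}")
# the model would be accepted for its intended use only if the error
# stays below the pre-agreed tolerance, here assumed to be 5% of the span
print("validated" if error < 0.05 else "not validated")
```

The key point is that the acceptance criterion is fixed before the comparison is made, so the validation outcome is a pass/fail judgment rather than a subjective impression of agreement.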

OpenSees Validation

Adapted from: What is Verification & Validation? NAFEMS

The verification and validation process can also reveal gaps and weaknesses in the software. For example, we identified issues with some elements, in particular with non-linear geometry, which led us to develop a new ASDshellQ4 element that addresses the problems we found. To learn more, open the PDF below containing the data.


We are still in the verification phase of the process, but we have already obtained some interesting data that we are eager to share. This is just the first step, the beginning of a long journey in benchmarking, and it will require time, dedication, and resources to complete. We are looking for collaborators, such as universities, research institutions, and individual researchers, who would be interested in working with us, implementing new problems, and contributing to this monumental undertaking. Please contact us if you are interested.
