The sketch2model algorithm is written in Python and makes use of the excellent open-source NumPy, SciPy, and scikit-image packages. This website is a Flask app and is styled with Twitter Bootstrap.
The idea behind sketch2model is that a user should be able to create forward seismic models easily: modeling at the speed of imagination, with a seamless transition from idea to synthetic seismic section. It should happen quickly enough to be part of a conversation, and it should happen where collaboration happens.
Geophysicists like to model wedges, and for good reason. However, wedge logic can get lost on colleagues, and it may not effectively demonstrate what seismic data can do in a given situation. The idea is not to supplant that kind of modeling, but to enable a new, lighter kind: modeling that can produce results for twelve different depositional scenarios as quickly as they can be sketched on a whiteboard.
We tackled this idea at the 2015 Geoscience Hackathon, hosted in Calgary, Alberta by Agile Geoscience. It was a great experience, with teams attacking geoscience computing projects over a weekend. The programming language of choice for this project was Python, for reasons nicely articulated by Agile in their blog post.
Building something mobile that turns a sketch into a synthetic seismic section is a pretty tall order for a weekend, so we took a shortcut by leveraging an existing project: Agile's online seismic modelling package, modelr. Because modelr works through any web browser (including on a smartphone), it kept things mobile. Better still, modelr already lets a user upload a PNG image and use it as a rock property model. We chose a web API to interface our code with the web application (as a bonus, this approach fit conveniently with the hackathon's theme of Web). With modelr handling the modeling, our hack was left with the task of turning a photo of a sketched geologic section into a PNG image in which each geologic body is identified by a different color. An image processing project!
We aimed to create an algorithm robust enough to handle an image of anything a user might sketch while accurately reproducing their intent. Marker on whiteboard presents different challenges than pencil on paper. Lighting conditions can be highly variable. Sketches can be simple or complex, tidy or messy. And when a user leaves a small gap between two lines of a sketch, should the algorithm take the sketch as-is and interpret a single body, or fill the small gap and interpret two separate bodies?
Matteo has used image processing for geoscience before, so he landed on an approach for our hack almost instantly: binarize the image to distinguish sketch from background (turning the color image into a binary one); identify and segregate the geobodies; and create an output image with each body colored uniquely.
Python libraries offer ready-made functions to binarize a color image, but for our application the results were very inconsistent: we needed a tool that works across a variety of media in varying lighting conditions. Fortunately, Matteo had some tricks up his sleeve to precondition the images before binarization, and we landed on a robust flow that can binarize whatever we throw at it. We will add more on this later.
Once the image is binarized, each geobody must be automatically identified as a unique element. If the sketch were reproduced exactly as intended, a segmentation function would do a good job. The trouble is that the captured sketch is rarely the same as the intended one: an artist may accidentally leave small gaps between sketch lines, or the medium itself can introduce unintentional effects (whiteboard markers, for example, can erase a little where sketch lines cross). We applied some morphological filtering to compensate for these imperfections. If applied too liberally, this type of filtering causes unwanted side effects. We will add more information on our approach later.
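As an illustration of the idea (not the project's actual filter), a morphological closing can bridge small accidental gaps in the ink before the enclosed regions are labeled. The footprint size is the knob mentioned above: large enough to seal accidents, small enough not to weld intentional features together. The function name and `max_gap` parameter are our own.

```python
import numpy as np
from skimage.measure import label
from skimage.morphology import binary_closing


def segment_bodies(ink, max_gap=4):
    """Label each region enclosed by sketch lines as a separate body.

    Closing the ink mask with a square footprint bridges gaps of up to
    ``max_gap`` pixels, so nearly-touching lines still seal off separate
    bodies. Too large a footprint starts merging thin features.
    """
    footprint = np.ones((max_gap + 1, max_gap + 1), dtype=bool)
    closed = binary_closing(ink, footprint)
    # Geobodies are the connected regions of non-ink pixels.
    return label(~closed, connectivity=1)
```

With `max_gap=0` the sketch is taken as-is, which is one way to expose the "fill the gap or not?" decision to the user.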
Compared to the binarization and segmentation, generating the output is a snap. With this final step, we've transformed a sketch into a PNG image in which each geologic body is a different color, ready to become a modelr synthetic section.
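To show why this step is a snap: once each body has an integer label, coloring is a single palette lookup. This is a minimal sketch under our own naming, not the sketch2model source.

```python
import numpy as np


def colorize_bodies(bodies, seed=0):
    """Map each labeled body to its own repeatable random RGB color.

    Label 0 (the ink of the sketch lines) is painted black.
    """
    rng = np.random.default_rng(seed)
    n_labels = bodies.max() + 1
    palette = rng.integers(0, 256, size=(n_labels, 3), dtype=np.uint8)
    palette[0] = 0              # sketch lines stay black
    return palette[bodies]      # fancy indexing colors the whole image
```

The resulting array can then be written to disk with, for example, `imageio.imwrite("model.png", rgb)` for upload as a rock property model.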
Sketch2model was a working prototype by the end of the hackathon. It wasn’t the most robust algorithm, but it worked on a good proportion of our test images. We were excited enough to continue development after the hackathon. Evidently, we weren’t the only ones interested in further development because sketch2model came up on the February 17th episode of Undersampled Radio.
"This is so cool. Draw something on a whiteboard and have a synthetic seismogram right on your iPad 5 seconds later. I mean, that's magical."
The algorithm and web interface have progressed to the point that you can use them on your own images. For those interested in the nuts and bolts of the algorithm, sketch2model has a repository on GitHub. Information posted on these sites is scant right now, but we are working to add more documentation over time.