Right now, the dominant tool for that is ThunderSTORM (GitHub - zitmen/thunderstorm: ThunderSTORM: a comprehensive ImageJ plugin for SMLM data analysis and super-resolution imaging), an ImageJ plugin that hasn't been updated (except for the addition of a phasor localization algorithm) since 2017. Since 2017, there has been a lot of innovation in all aspects of SMLM processing: PSF fitting (c-spline, in-situ retrieval), localization of blinking events (phasor, GPU-accelerated MLE and c-spline fitting, deep-learning based localization inference), new and optimized acquisition schemes (spectral demixing, salvaged fluorescence), quality assessment and validation of the fitted events and the obtained image, assessment of the image resolution (FRC, FSC, decorrelation, NeNa)… Some of these are now validated and used by several labs, and might be very useful for the broader community.

I think at this point it would be good to think about what form an effort toward "ThunderSTORM 2.0" could take. Some questions worth asking at this point are: what makes ThunderSTORM so successful and broadly used, even 5 years after the last commit? Is it because it's ImageJ-based? Is it the combination of functions that it offers? How would this effort be best directed? What would be the best platform (Python, ImageJ/Fiji, something else)? What are the existing, modern bricks that could be used for it?

There are already several insights in the Twitter thread linked above. It mentions a handful of SMLM software packages that are in active development these days, such as SMAP, Picasso, or PyME. Other "bricks" are GPU-accelerated fitting libraries (GPUfit, Splinefit), deep-learning based localization (DeepSTORM, DECODE), and visualization solutions (Napari, VR tools like vLUME and Genuage). Some participants in the thread are also present here; hopefully others will join. I am not a coder, but as a long-time SMLM user I have experience with almost all of these and would be happy to assist by sharing knowledge, providing test data, and testing software. Let me know what you think and hopefully we can get the ball rolling!

Thanks J-K for letting me know about this thread. Indeed, I have been discussing with people in the field about developing a Python-based SMLM analysis software using Napari as a platform. This started with an effort to implement a powerful 3D viewer for SMLM data in Napari; Martin Weigert is working on this. In my own group we developed SMAP (MATLAB-based), which after more than 10 years of development has a lot of functionality, is fast enough, and, as it is used quite a lot by us, has well-tested versions of most of the regularly used plugins. Newer projects in the lab, however, use Python (e.g. DECODE and exciting projects still in development). I am a bit torn between just interfacing this new code from SMAP (and otherwise keeping a platform that works very well for us), and re-implementing the main functionality of SMAP in Python/Napari. I think Napari is a good platform: through the CZI funding it has gained quite some momentum and is actively developed, and it provides a framework that makes GUIs, plugins, data and image handling, and rendering much simpler. I would have funding for this, and if I find the right person and there is community support, I could imagine leading this effort.
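To give a sense of how lightweight some of these "bricks" are, here is a minimal sketch of the phasor-based localization idea mentioned above (the algorithm that was added to ThunderSTORM), assuming only numpy. The function name `phasor_localize` and the ROI handling are mine for illustration, not taken from any of the packages discussed:

```python
import numpy as np

def phasor_localize(roi):
    """Estimate the sub-pixel (x, y) position of a single emitter in a
    small ROI from the phase of the first Fourier coefficients."""
    f = np.fft.fft2(roi)
    n_y, n_x = roi.shape
    # The phase of the first harmonic along each axis encodes the
    # centroid of the spot within the (periodic) ROI: for a spot at
    # x0, F[0, 1] ~ A * exp(-2*pi*i*x0/n_x), so the angle gives x0.
    x = (-np.angle(f[0, 1])) % (2 * np.pi) * n_x / (2 * np.pi)
    y = (-np.angle(f[1, 0])) % (2 * np.pi) * n_y / (2 * np.pi)
    return x, y
```

For a reasonably centered spot this reaches sub-pixel accuracy from just two FFT coefficients, which is why phasor localization is so much faster than iterative MLE or Gaussian fitting, at some cost in precision near the ROI edges.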