Transaction
a9985e1712f7f5eaa82dae7b69ca697e6541aac1b3a5271ec7bbeeaaf4e64f4b

Timestamp (UTC): 2024-08-08 22:47:22
Fee Paid: 0.00000032 BSV (0.00065982 BSV - 0.00065950 BSV)
Fee Rate: 2.065 sat/KB
Version: 1
Confirmations: 80,087
Size: 15,493 B

3 Outputs (Total Output: 0.00065950 BSV)
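The fee figures in the summary above are internally consistent; a quick sanity check in plain Python, using the values as displayed:

```python
# Fee = total inputs - total outputs; fee rate = fee (sat) per 1,000 bytes.
SAT_PER_BSV = 100_000_000

inputs_bsv = 0.00065982
outputs_bsv = 0.00065950
size_bytes = 15_493

fee_sat = round((inputs_bsv - outputs_bsv) * SAT_PER_BSV)  # 32 sat
rate_sat_per_kb = fee_sat / size_bytes * 1000              # ~2.065 sat/KB

print(fee_sat, round(rate_sat_per_kb, 3))
```

This reproduces the 0.00000032 BSV (32 sat) fee and the displayed 2.065 sat/KB rate.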
  • jmetaB027e69d87efea25f40a095960392baeb8c81eb3bb32c9e36f29ba5a0f6880ec3c2@f912fa1cbeba7c5c37f7a31ec74014be418cfb90b295484afacfc728df6f92e9 rss.item metarss.net <item> <title>Large-scale pathology foundation models show promise on a variety of cancer-related tasks</title> <link>https://www.microsoft.com/en-us/research/blog/large-scale-pathology-foundation-models-show-promise-on-a-variety-of-cancer-related-tasks/</link> <dc:creator><![CDATA[Brenda Potts]]></dc:creator> <pubDate>Thu, 08 Aug 2024 19:13:34 +0000</pubDate> <category><![CDATA[Research Blog]]></category> <guid isPermaLink="false"/> <description><![CDATA[<p>Microsoft researchers collaborated to release new pathology foundation models. Their report shows that the models benefit from diverse data, increased model size, and specialized algorithms, which together enhance the accuracy and applicability of cancer diagnosis and treatment.</p> <p>The post <a href="https://www.microsoft.com/en-us/research/blog/large-scale-pathology-foundation-models-show-promise-on-a-variety-of-cancer-related-tasks/">Large-scale pathology foundation models show promise on a variety of cancer-related tasks</a> appeared first on <a href="https://www.microsoft.com/en-us/research">Microsoft Research</a>.</p> ]]></description> <content:encoded><![CDATA[ <figure class="wp-block-image size-full"><img width="1400" height="788" src="https://www.microsoft.com/en-us/research/uploads/prodnew/2024/08/Verchow-BlogHeroFeature-1400x788-1.jpg" alt="Male Doctor Using Computer At Desk In Hospital" class="wp-image-1068084" /></figure> <p>Imagine if pathologists had tools that could help predict therapeutic responses just by analyzing images of cancer tissue. This vision may someday become a reality through the revolutionary field of computational pathology. By leveraging AI and machine learning, researchers can now analyze digitized tissue samples with unprecedented accuracy and scale, potentially transforming how we understand and treat cancer.</p> <p>When a patient is suspected of having cancer, a tissue specimen is sometimes removed, stained, affixed to a glass slide, and analyzed by a pathologist under a microscope. Pathologists perform several tasks on this tissue, such as detecting cancerous cells and determining the cancer subtype. Increasingly, these tiny tissue samples are being digitized into enormous whole slide images, detailed enough to be up to 50,000 times larger than a typical photo stored on a mobile phone.
The recent success of machine learning models, combined with the increasing availability of these images, has ignited the field of computational pathology, which focuses on the creation and application of machine learning models for tissue analysis and aims to uncover new insights in the fight against cancer.</p> <p>Until recently, the potential applicability and impact of computational pathology models were limited because these models were diagnostic-specific and typically trained on narrow samples. Consequently, they often lacked sufficient performance for real-world clinical practice, where patient samples represent a broad spectrum of disease characteristics and laboratory preparations. In addition, applications for rare and uncommon cancers struggled to collect adequate sample sizes, which further limited the reach of computational pathology.</p> <p>The rise of foundation models is introducing a new paradigm in computational pathology. These large neural networks are trained on vast and diverse datasets that do not need to be labeled, making them capable of generalizing to many tasks. They have created new possibilities for learning from large, unlabeled whole slide images. However, the success of foundation models critically depends on the size of both the dataset and the model itself.</p> <h2 class="wp-block-heading" id="advancing-pathology-foundation-models-with-data-scale-model-scale-and-algorithmic-innovation">Advancing pathology foundation models with data scale, model scale, and algorithmic innovation</h2> <p>Microsoft Research, in collaboration with <a class="msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall" href="https://paige.ai/" target="_blank" rel="noreferrer noopener">Paige</a>, a global leader in clinical AI applications for cancer, is advancing the state of the art in computational pathology foundation models.
The first contribution of this collaboration is a model named Virchow, and our research about it was recently published in <a class="msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall" href="https://www.nature.com/articles/s41591-024-03141-0" target="_blank" rel="noreferrer noopener">Nature Medicine</a>. Virchow serves as a significant proof point for foundation models in pathology, demonstrating how a single model can be useful in detecting both common and rare cancers and fulfilling the promise of generalizable representations. Following this success, we have developed two second-generation foundation models for computational pathology, <a class="msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall" href="https://www.businesswire.com/news/home/20240808348827/en/Unlocking-the-Complexities-of-Cancer-Paige-Launches-Worlds-Largest-AI-Models-to-Revolutionize-Cancer-Diagnosis-with-Second-Generation-of-Virchow">Virchow2 and Virchow2G</a>, which benefit from unprecedented scaling of both dataset and model sizes, as shown in Figure 1.</p> <figure class="wp-block-image aligncenter size-full"><img width="2100" height="571" src="https://www.microsoft.com/en-us/research/uploads/prodnew/2024/08/Fig_scaling_teaser-1.jpg" alt="A scaling plot of performance (y-axis) against the number of model parameters (left) and the number of training whole slide images (right). The middle panel describes how Virchow2 increases dataset size and diversity and introduces pathology-specific training; Virchow2G further increases the model size." class="wp-image-1067133" /><figcaption class="wp-element-caption">Figure 1. Virchow2G achieves state-of-the-art performance on pathology tasks by leveraging an enormous dataset and model size.</figcaption></figure> <p>Beyond access to a large dataset and significant computational power, our team showed that tailoring the algorithms used to train foundation models to the unique aspects of pathology data can further improve performance.
These three pillars—data scale, model scale, and algorithmic innovation—are described in a <a href="https://www.microsoft.com/en-us/research/publication/virchow-2-scaling-self-supervised-mixed-magnification-models-in-pathology/" target="_blank" rel="noreferrer noopener">recent technical report</a>.</p> <h2 class="wp-block-heading"
id="virchow-foundation-models-and-their-performance">Virchow foundation models and their performance</h2> <p>Using data from over 3.1 million whole slide images (2.4 PB of data) covering more than 40 tissue types from 225,000 patients in 45 countries, the Virchow2 and Virchow2G models are trained on the largest known digital pathology dataset. Virchow2 matches the model size of the first-generation Virchow at 632 million parameters, while Virchow2G scales the model size to 1.85 billion parameters, making it the largest pathology model to date.</p> <p>In the report, we evaluate the performance of these foundation models on twelve tasks, aiming to capture the breadth of application areas for computational pathology. Early results suggest that Virchow2 and Virchow2G are better at identifying tiny details in cell shapes and structures, as illustrated in Figure 2. They perform well in tasks like detecting cell division and predicting gene activity. These tasks likely benefit from quantification of nuanced features, such as the shape and orientation of the cell nucleus. We are currently working to expand the number of evaluation tasks to include even more capabilities.</p> <figure class="wp-block-image aligncenter size-full"><img width="3648" height="591" src="https://www.microsoft.com/en-us/research/uploads/prodnew/2024/08/blog_fig2_v2_labeled.png" alt="Left to right: an image of H&E stained colorectal tissue, the same image with expert annotation of cell types, and the same image with the most prominent features as determined by Virchow; then a second such triplet. In both cases, Virchow highlights the cancer cells." class="wp-image-1068066" /><figcaption class="wp-element-caption">Figure 2. Virchow learned to disentangle diverse content in pathology images. This figure shows three visualizations of stained colorectal tissue samples: the tissue samples themselves (A), expert annotations (B), and model representations (C). The cancer cells (B, red) are highlighted (C) when selecting for the most prominent content in the image.</figcaption></figure> <h2 class="wp-block-heading" id="looking-forward">Looking forward</h2> <p>Foundation models in healthcare and life sciences have the potential to significantly benefit society. Our collaboration on the Virchow models has laid the groundwork, and we aim to continue developing these models to give them more capabilities.
At <a href="https://www.microsoft.com/en-us/research/lab/microsoft-health-futures" target="_blank" rel="noreferrer noopener">Microsoft Research Health Futures</a>, we believe that further research and development could lead to new applications for routine imaging, such as biomarker prediction, with the goal of more effective and timely cancer treatments.</p> <p>Paige has released Virchow2 on <a class="msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall" href="https://huggingface.co/paige-ai/Virchow-2">Hugging Face</a>, and we invite the research community to explore the new insights that computational pathology models can reveal. Note that Virchow2 and Virchow2G are research models and are not intended to inform diagnostic or treatment decisions.</p> ]]></content:encoded> </item>
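Pathology foundation models of this kind are typically used as frozen tile encoders: a whole slide image is far too large to feed to a network at once, so it is split into fixed-size tiles, each tile is embedded, and the tile embeddings are pooled into a slide-level representation for downstream tasks. Below is a minimal NumPy sketch of that tile-then-aggregate pattern with toy dimensions and a random linear projection standing in for the real encoder; the actual model, input tile size, and embedding width for Virchow2 come from its Hugging Face model card, not from this code.

```python
import numpy as np

TILE = 32  # toy tile size; real pathology encoders commonly use 224x224 tiles
DIM = 64   # toy embedding width; real encoders are far wider

def tile_slide(slide: np.ndarray, tile: int = TILE) -> np.ndarray:
    """Split an (H, W, 3) slide array into non-overlapping tile x tile patches."""
    h, w, _ = slide.shape
    patches = [
        slide[y:y + tile, x:x + tile]
        for y in range(0, h - tile + 1, tile)
        for x in range(0, w - tile + 1, tile)
    ]
    return np.stack(patches)

# Stand-in "encoder": a fixed random projection of each flattened tile.
rng = np.random.default_rng(0)
proj = rng.standard_normal((TILE * TILE * 3, DIM)).astype(np.float32) * 0.01

def embed_tiles(tiles: np.ndarray) -> np.ndarray:
    """Map (N, TILE, TILE, 3) uint8 tiles to (N, DIM) float embeddings."""
    flat = tiles.reshape(len(tiles), -1).astype(np.float32) / 255.0
    return flat @ proj

# A toy 64x96 "slide" yields a 2x3 grid of tiles.
slide = rng.integers(0, 256, size=(64, 96, 3), dtype=np.uint8)
tiles = tile_slide(slide)                # (6, 32, 32, 3)
features = embed_tiles(tiles)            # (6, 64)
slide_embedding = features.mean(axis=0)  # mean-pooled slide-level vector
```

In practice the mean-pooling step is usually replaced by a learned aggregator, such as attention-based multiple-instance learning, trained on top of the frozen tile embeddings.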
Source: https://whatsonchain.com/tx/a9985e1712f7f5eaa82dae7b69ca697e6541aac1b3a5271ec7bbeeaaf4e64f4b