Academic News and Information

Culturing organotypic brain slices to study brain disorders

Researchers have developed a low-cost and easy-to-use model to study living brain cells in Alzheimer’s and Parkinson’s disease. Their novel “ring-inserts” allow the long-term culture and imaging of nerve cells directly in brain tissue, offering a new platform for drug testing, live-cell microscopy and disease research.

Alzheimer’s and Parkinson’s disease are among the most devastating neurodegenerative disorders, affecting millions of people worldwide. They progressively destroy specific types of brain cells, leading to memory loss, movement difficulties and severe cognitive decline. To understand how and why neurons die in these diseases, and to develop substances that can protect or regenerate them, reliable experimental models are essential. One promising approach is organotypic brain slice cultures: 150 μm-thick sections of brain tissue, about the width of a human hair, taken from young mice. These slices preserve the brain’s natural structure and organization, allowing scientists to study living nerve cells in a realistic environment outside the body.

image: Novel “ring-inserts” are prepared (1) and loaded with growth factors of interest using microcontact printing (2). Brain slices taken from postnatal day 8-10 mice are added (3) and nerve fibers grow along these prints. Calcium imaging is performed on living slices (4) and nerve fibers are stained using immunofluorescence (5).

Credit: Alessa Gern, Patricia Lehmann, Judith Schäfer, Christian Humpel / Laboratory of Psychiatry and Experimental Alzheimer’s Research, Department of Psychiatry and Psychotherapy, Medical University of Innsbruck, Austria

Such models are not only important for neuroscience research but also contribute to the so-called 3R principle in animal research: reduce the number of animals used, refine methods to minimize harm and replace animal experiments wherever possible. From a single mouse brain, around 50-100 slices can be obtained, significantly reducing the number of animals required. However, culturing brain slices remains technically demanding and expensive, especially when using commercial membrane inserts, which can cost up to €20 apiece and often limit the ability to observe living cells under the microscope. For example, a typical experiment using 200 brain slices would cost between €3,000 and €4,000 with commercial inserts.

To address this, researchers from the Psychiatric Laboratory at the Medical University of Innsbruck developed a simple and innovative solution: self-made “ring-inserts.” Each insert consists of a small silicone ring with a thin, permeable membrane attached to its base. Thin brain slices are placed directly onto this membrane, which sits just above a reservoir of nutrient-rich culture medium. The membrane allows nutrients to diffuse up into the tissue, keeping the slices alive for extended periods. Because the tissue lies flat and remains stable during cultivation, the inserts can be used directly for microscopic imaging of the living brain cells.

“Our ring-inserts cost only about 2 to 3 euros each, making them up to ten times more affordable than standard commercial inserts”, explains Univ. Prof. Dr. Christian Humpel, the senior author who led the study. “Despite their simplicity, they support high-quality brain tissue cultures and allow us to observe living neurons in real time, including how they grow, form connections and respond to stimulation.”

By comparison, the same 200-slice experiment would cost only around €500 using the self-developed “ring-inserts”. This represents a potential cost reduction of up to €3,500 without compromising experimental quality.

The research focused on two key types of nerve cells involved in neurodegenerative diseases: cholinergic neurons, which help regulate memory and attention and are severely affected in Alzheimer’s disease, and dopaminergic neurons, which play a central role in movement and are lost in Parkinson’s disease.

To guide the growth of new nerve fibers, the team used a technique called microcontact printing (μCP). This involves stamping tiny lines of biological signals-known as growth factors-onto the membrane surface of the “ring-inserts”. These lines act like “tracks” or “paths” that direct the growth of nerve fibers. Growth factors are naturally occurring proteins that support the survival, growth and differentiation of neurons. Specifically, the researchers printed nerve growth factor (NGF), which promotes the development of cholinergic neurons, and glial cell line-derived neurotrophic factor (GDNF), which supports dopaminergic neurons. After printing, the organotypic brain slices were cultured directly on these patterned “ring-inserts”, allowing the neurons to grow along the predefined lines. Both cell types were successfully cultured on the “ring-inserts” and survived for two weeks.

Using these “ring-inserts”, the researchers were not only able to keep cholinergic and dopaminergic neurons alive in culture but also to study their structure and activity in detail. The inserts proved ideal for live-cell imaging, making it possible to observe individual neurons and their growing nerve fibers over time. To test whether the cells were still functionally active, the team used a method called calcium imaging. They added special fluorescent dyes, such as Rhod-4 and Fluo-4, that light up when calcium enters the cell, a signal that the neuron is responding to a stimulus. After applying a chemical trigger (potassium chloride), the researchers observed clear flashes of fluorescence, showing that the neurons were not only intact but also capable of functional activity and communication.

“We can monitor live neuronal activity in real time,” says first author Alessa Gern. “It gives us direct insight into how these cells function, respond to stimulation and potentially degenerate. This was not possible before with such cost-effective and simple tools like the ‘ring-inserts’.”
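For readers curious how such calcium recordings are usually quantified, the sketch below computes a relative fluorescence change (ΔF/F0) for a single neuron before and after a KCl stimulus. This is a generic, minimal illustration with made-up trace values and frame numbers, not the authors' actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fluorescence trace of one neuron (arbitrary units, one value per frame).
# Frames 0-49 are baseline; KCl is assumed to be applied at frame 50.
trace = np.concatenate([rng.normal(100, 2, 50),    # baseline
                        rng.normal(160, 5, 30),    # response to KCl
                        rng.normal(105, 3, 40)])   # decay back toward baseline

f0 = trace[:50].mean()          # baseline fluorescence F0 (pre-stimulus mean)
dff = (trace - f0) / f0         # relative change dF/F0 for every frame

print(f"Peak dF/F0 after stimulation: {dff[50:].max():.2f}")
# A clear positive peak indicates calcium influx, i.e. the neuron responded to the stimulus.
```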

The researchers also see future applications beyond basic neuroscience. In addition to its cost and ethical advantages, the “ring-insert” model is highly versatile. It can be combined with drug testing, genetic engineering, viral delivery, or high-resolution imaging techniques. The inserts can also be modified to support more complex co-culture systems, such as combining brain slices with blood vessel cells to simulate the blood–brain barrier, a key structure for studying how substances enter or are blocked from the brain.

Looking ahead, the team mentions that the method could also be adapted for use with adult brain tissue or even with human samples obtained from surgical procedures. This would bring laboratory models even closer to real disease conditions and could help accelerate the discovery of new therapies for Alzheimer’s, Parkinson’s, and related disorders.

“Our vision is to enable long-term, real-time observation of living brain cells in a flexible and affordable setup,” says Univ. Prof. Dr. Humpel. “This could help transform how we study the brain and how we develop treatments, while also supporting the move away from animal experiments.”

In summary, the ring-insert system developed in Innsbruck provides a cost-effective, scalable, and ethically responsible platform for studying the living brain. By combining realistic tissue models with live-cell imaging capabilities, it opens new doors in neuroscience, disease modeling and drug development, bringing researchers closer to understanding and treating the brain’s most complex disorders.

This paper was published in Biofunctional Materials (ISSN: 2959-0582), an online multidisciplinary open access journal aiming to provide a peer-reviewed forum for innovation, research and development related to bioactive materials, biomedical materials, bio-inspired materials, bio-fabrications and other bio-functional materials.

Citation: Gern A, Lehmann P, Schäfer J, Humpel C. Organotypic mouse brain slices: low-cost “ring-inserts” to study cholinergic and dopaminergic neurons with live cell imaging with an emphasis on calcium imaging. Biofunct. Mater. 2025(2):0011.

Source from [https://www.eurekalert.org/news-releases/1094075].

2025-10-24

Machine learning ushers in a new era for advanced nuclear materials research

Shanghai, August 21, 2025 — Nuclear energy is widely recognized as one of the most promising clean energy sources for the future, but its safe and efficient use depends critically on the development of robust nuclear fuels and structural materials that can endure extreme environments. A newly published review in AI & Materials highlights how machine learning (ML) is transforming this field, enabling scientists to accelerate discoveries, optimize performance, and overcome long-standing challenges in nuclear materials research.

image: Schematic illustration of applying machine learning to nuclear materials research: addressing reactor challenges through data-driven property prediction and advancing toward hybrid ML–physics frameworks.

Credit: Chaoyue Jin and Shurong Ding / Fudan University

The article, titled “Machine learning in research and development of advanced nuclear materials: a systematic review for continuum-scale modelling,” was authored by Chaoyue Jin and Professor Shurong Ding from the Institute of Mechanics and Computational Engineering at Fudan University. It provides an in-depth systematic review of how ML has been applied to nuclear fuels and structural materials, particularly at the continuum scale.

Why Nuclear Materials Matter

Inside a nuclear reactor, materials are exposed to intense neutron irradiation, high temperatures, mechanical stresses, and corrosive chemical environments. These conditions cause dynamic and coupled thermo-mechanical responses, such as swelling, embrittlement, creep deformation, and loss of thermal conductivity. If not properly understood and controlled, such behaviors can compromise reactor safety.

Traditionally, characterizing these effects has required expensive, time-consuming, and sometimes dangerous irradiation experiments. While theoretical modeling and large-scale numerical simulations have provided valuable insights, the complexity of multi-scale interactions often limits their predictive accuracy.

This is where machine learning comes in.

The Role of Machine Learning

Over the past decade, ML techniques have demonstrated remarkable potential in materials science, from alloy design to catalyst discovery. In the nuclear field, ML is now emerging as a key tool to:

Analyze complex microstructures: Convolutional neural networks (CNNs) are used to identify grain boundaries, porosity, and irradiation damage in fuels and claddings, extracting patterns that are invisible to the human eye.

Predict thermal conductivity and mechanical behavior: Deep neural networks (DNNs) and regression models can rapidly estimate properties such as thermal conductivity or yield strength, accelerating the evaluation of material performance.

Optimize processing and fabrication: ML models link manufacturing parameters with final material microstructures, enabling researchers to fine-tune processes like annealing or rolling to achieve superior performance.

Integrate with physics-based models: By combining ML with finite element simulations (FEM), researchers are building hybrid frameworks that can generate synthetic datasets, reduce reliance on costly experiments, and ensure that ML predictions remain physically meaningful.
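As a minimal sketch of the last point, the example below trains a data-driven surrogate on a synthetic, FEM-style dataset to predict a property such as thermal conductivity. The feature names, the toy formula for the target and the model choice are illustrative assumptions, not taken from the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical synthetic dataset standing in for FEM simulation output.
# Features: [temperature (K), porosity (-), irradiation dose (dpa)]
X = np.column_stack([rng.uniform(300, 1500, 2000),
                     rng.uniform(0.0, 0.15, 2000),
                     rng.uniform(0.0, 50.0, 2000)])
# Toy target: thermal conductivity decreasing with temperature, porosity and dose.
y = 8.0 / (1 + 0.001 * X[:, 0]) * (1 - 2.5 * X[:, 1]) / (1 + 0.02 * X[:, 2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("MAE on held-out synthetic data:",
      round(mean_absolute_error(y_te, model.predict(X_te)), 3))
```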

Challenges and Limitations

Despite its promise, the review emphasizes that applying ML in nuclear materials research is not without hurdles. The scarcity of high-quality irradiation datasets remains a major bottleneck, as collecting reliable in-reactor data is both dangerous and expensive. Moreover, selecting the right features that faithfully capture the underlying physics—such as porosity, grain boundary evolution, or irradiation dose—is a non-trivial task.

Another challenge lies in the interpretability of ML models. “Black-box” predictions are insufficient for nuclear applications, where safety and reliability are paramount. The authors argue that future efforts must focus on hybrid ML–physics approaches, embedding physical laws and mechanistic insights directly into ML frameworks.

Looking Ahead

The review outlines several promising directions for future research:

Hybrid frameworks that integrate ML with physical constraints and governing equations.

Time-dependent modeling that captures temporal and spatial correlations in materials under irradiation.

Physics-informed dataset generation using FEM to overcome the scarcity of experimental data.

Inverse design approaches, where ML can suggest the optimal composition or microstructure to achieve a desired performance.
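To make the idea of embedding physical constraints concrete, the sketch below fits a toy one-dimensional temperature profile with a loss that combines a data-misfit term and a penalty on the residual of an assumed governing equation. The equation, parameters and crude optimizer are all hypothetical and serve only to illustrate a hybrid ML–physics loss, not any method from the review.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed 1-D steady-state heat conduction with a uniform source: d2T/dx2 = -q/k.
q_over_k = 2.0
x = np.linspace(0.0, 1.0, 21)
T_data = 1.0 + 0.5 * x - q_over_k * x**2 / 2 + rng.normal(0, 0.01, x.size)  # noisy "measurements"

def model(coeffs, x):
    a, b, c = coeffs                       # quadratic ansatz T(x) = a + b*x + c*x^2
    return a + b * x + c * x**2

def hybrid_loss(coeffs):
    data_term = np.mean((model(coeffs, x) - T_data) ** 2)    # misfit to the data
    physics_term = (2 * coeffs[2] + q_over_k) ** 2           # residual of d2T/dx2 + q/k = 0
    return data_term + 10.0 * physics_term                   # weighted sum of both terms

# Crude random search, only to show the combined loss being minimized.
candidates = rng.normal(0, 1, (20000, 3))
best = min(candidates, key=hybrid_loss)
print("best coefficients (a, b, c):", np.round(best, 3),
      "loss:", round(hybrid_loss(best), 4))
```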

According to Professor Shurong Ding, “Machine learning is not a replacement for physics-based understanding, but a powerful partner. By combining the strengths of both, we can significantly shorten the development cycle of advanced nuclear materials and enhance reactor safety.”

About the Article

Title: Machine learning in research and development of advanced nuclear materials: a systematic review for continuum-scale modelling

Authors: Chaoyue Jin, Shurong Ding*

Journal: AI & Materials, 2025(2):0012

DOI: https://doi.org/10.55092/aimat20250012

Source from [https://www.eurekalert.org/news-releases/1095934].

2025-10-24

[NCIC 2025] The 3rd International Conference on Networks, Communications and Intelligent Computing concludes successfully in Jiaozuo

The 3rd International Conference on Networks, Communications and Intelligent Computing (NCIC 2025), hosted by Henan Polytechnic University, co-hosted by Beijing Information Science and Technology University, and supported by the publisher ELSP, the ESBK International Academic Center and the AC academic platform, concluded successfully in Jiaozuo on October 18, 2025. The conference brought together experts and scholars from many regions for in-depth exchange and discussion on cutting-edge topics in networks, communications and intelligent computing.

Figure 1. Conference group photo

Figure 2. Conference venue

On the morning of October 18, Professor Li Ming, conference co-chair and Dean of the School of Physics and Electronic Information at Henan Polytechnic University, delivered the opening address. On behalf of the organizing committee he warmly welcomed the attending experts and guests, and encouraged scholars to engage in in-depth discussion, break down disciplinary barriers, overcome technical bottlenecks, and achieve an organic linkage of the innovation, industry and value chains.

Figure 3. Conference co-chair Professor Li Ming delivering the opening address

Next, Professor Zhai Yaonan, Vice President of Henan Polytechnic University, gave a welcome speech. He welcomed the attending experts and scholars, briefly introduced Henan Polytechnic University, and expressed the hope that participants would not only gain much from the conference but also enjoy the natural scenery and cultural charm of Jiaozuo.

Figure 4. Professor Zhai Yaonan, Vice President of Henan Polytechnic University, delivering the welcome speech

The conference brought together high-level industry experts and scholars, and invited several internationally renowned specialists in their fields: Professor Gao Feifei of Tsinghua University (IEEE Fellow); Professor Dong Mianxiong, Vice President of Muroran Institute of Technology, Japan, and Foreign Member of the Engineering Academy of Japan; Professor Witold Pedrycz of the University of Alberta, Canada (IEEE Fellow); Professor Zhao Junhui of Beijing Jiaotong University; and Professor Wang Gongpu of Beijing Jiaotong University. Their keynote reports were a highlight of the conference.

Figure 5. Professor Gao Feifei of Tsinghua University (IEEE Fellow) delivering a keynote report

Figure 6. Professor Dong Mianxiong, Foreign Member of the Engineering Academy of Japan, delivering a keynote report

Figure 7. Professor Witold Pedrycz (IEEE Fellow) delivering an online keynote report

Figure 8. Professor Zhao Junhui of Beijing Jiaotong University delivering a keynote report

Figure 9. Professor Wang Gongpu of Beijing Jiaotong University delivering a keynote report

In the afternoon parallel sessions, participants held in-depth discussions on networks, communications and intelligent computing.

Figure 10. Group photo of parallel session 1

Figure 11. Group photo of parallel session 2

The conference concluded with a banquet and award ceremony. In a warm atmosphere, participants took the opportunity to exchange ideas and deepen mutual understanding and cooperation. With the close of the banquet, the 3rd International Conference on Networks, Communications and Intelligent Computing came to a successful end.

Award banquet

The successful organization of the 3rd International Conference on Networks, Communications and Intelligent Computing (NCIC 2025) not only provided a platform for the academic and industrial communities to exchange ideas and share results, but also further promoted cross-disciplinary integration and collaborative innovation in related fields. With continued effort and support from all parties, NCIC is expected to keep pooling wisdom and strength, breaking down disciplinary barriers, overcoming technical bottlenecks, and translating research results into industrial upgrading, technological progress and industrial development, jointly opening a new chapter in the intelligent era.

2025-10-24

Improving robot learning by combining decision making and machine learning for water analysis

This study, published in Robot Learning, focuses on water analysis using a combination of decision making and machine learning in a recently developed robotic system. The procedure the researchers applied can significantly improve the performance of robots designed to detect, analyze and distinguish drinking water on Earth and on other planets.

image: Improvement of robot learning with combination of decision making and machine learning for water analysis

Credit: Taraneh Javanbakht/École de Technologie Supérieure, Arbnor Pajaziti/University of Prishtina, Shaban Buza/University of Prishtina

Robot learning is an important ability for performing water analysis without human intervention. Developing robots that can learn appropriate tasks and carry them out efficiently relies on skill acquisition and training. The benefits of autonomous robots capable of water analysis include rapid response in crisis situations, sustainable resource management, planetary exploration and reduced human intervention. Although robot learning has been investigated for various tasks such as object manipulation, item cleaning and interactive or multi-task learning, it had not previously been investigated for water analysis using a combination of decision making and machine learning (ML).

Detecting and distinguishing drinking water are important tasks for robots. Heavy metals and organic materials are toxic water pollutants that have caused health and environmental problems worldwide. Robots able to detect these contaminants and distinguish drinking water on Earth and on other planets without human intervention are therefore needed.

For years, ML has been investigated for water analysis without being combined with decision making. However, human decision making based on categorization is a preliminary step for learning, so combining the two processes is needed for a more appropriate analysis of water samples in robotics. In the current work, the researchers therefore applied a combination of decision making and ML to improve robot learning for water analysis.

For the first time, the researchers combined decision making using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with ML implemented in Python using Microsoft Visual Studio. The Random Forest Classifier, a supervised ML algorithm, was used for water analysis.
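For readers unfamiliar with TOPSIS, the following is a generic worked example of the ranking procedure on made-up criteria values; it is not the authors' code or data.

```python
import numpy as np

# Toy decision matrix: 4 water-sample candidates x 3 criteria.
# Columns: [purity score, availability score, treatment cost]
X = np.array([[0.80, 0.70, 0.30],
              [0.60, 0.90, 0.20],
              [0.90, 0.50, 0.60],
              [0.70, 0.60, 0.40]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])     # cost criterion: smaller is better

R = X / np.linalg.norm(X, axis=0)           # vector normalization
V = R * weights                             # weighted normalized matrix

ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus  = np.linalg.norm(V - ideal, axis=1)       # distance to ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1)  # distance to anti-ideal solution
closeness = d_minus / (d_plus + d_minus)          # higher = closer to the ideal

print("ranking (best first):", np.argsort(-closeness))
```

Consistent with the article, candidates with high values on the profit (benefit) criteria and low values on the cost criterion receive the better ranks.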

Information on more than 3,200 water samples from the “Water Quality and Potability Dataset”, available in the dataset section of the Kaggle website, was used for the water analysis. Dataset preprocessing was performed by completing the data table before analysis.

The TOPSIS analysis of water samples showed that candidates with high values of the profit criteria and low values of the cost criteria obtained a better rank. The same result was obtained in the analysis of the physicochemical properties and ingredients of the water samples. The ML simulation showed that using the modified code could improve the learning accuracy to 69%, which improved further to 73% after using the Synthetic Minority Over-sampling Technique (SMOTE) for class balancing and tuning the hyperparameters.
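The sketch below shows what such a pipeline could look like in Python, assuming the Kaggle water-potability CSV with a `Potability` label column and the imbalanced-learn package for SMOTE. The file name, column names and parameters are illustrative assumptions, not the authors' exact code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

# Kaggle "Water Quality and Potability"-style data: physicochemical features + Potability label.
df = pd.read_csv("water_potability.csv")
X = df.drop(columns=["Potability"])
y = df["Potability"]

# "Completing the data table": fill missing entries before analysis.
X = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(X), columns=X.columns)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Class balancing with SMOTE, then a (lightly tuned) Random Forest classifier.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_bal, y_bal)

print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```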

The robotic system designed and developed for use with the simulation software includes electronic devices such as DC thruster or drive motors, a battery, a solar panel and a DC/DC converter. In this system, control is carried out via a remote control and a command receiver. The four-channel remote control provides adjustable speed, and the direction can be adjusted easily so that the model travels straight. The receiver board has reverse-connection protection at the input and a self-recovering fuse at the output for a stable signal.

The designed robotic system would be able to deposit water samples in a storage area, where they would be collected by steering the ship-like platform and using a robotic arm. The equipment needed to build the ship’s platform, including motors, battery, solar panel, DC/DC converter, robotic arm and sensors, has been developed for monitoring the physical and chemical parameters of water. After water samples are collected using the motorized platform and robotic arm, sensors would measure their physicochemical parameters in real time. These sensor readings would be processed by the onboard system, where the trained ML model, combined with the decision-making process, would classify the water samples as drinkable or undrinkable. The classification results would guide the robot’s decision to store or discard each sample, integrating hardware control with intelligent analysis.

To address the growing need for efficient water resource management and exploration, advanced robotic systems would be equipped with autonomous water-sample analysis capabilities. These enhancements will be realized through the integration of cutting-edge technologies in robotics, sensor systems and artificial intelligence. Considering the challenges related to sensor accuracy, data noise and system scalability provides a balanced perspective on the feasibility of the developed robotic water analysis system, highlighting both potential limitations and strategies for overcoming them. Addressing these issues would ensure that the system remains practical and effective for real-world applications.

While the obtained results are promising for the future of robotics, further investigations including the application of sensors would be required for the implementation of the results of the current work in the developed robotic system. This could help develop a unique robotic platform for the detection, analysis and distinction of drinking water on Earth and other planets.

Javanbakht T, Pajaziti A, Buza S. Combination of decision making and machine learning for improvement of robot learning for water analysis. Robot Learn. 2025(2):0006, https://doi.org/10.55092/rl20250006

Source from [https://www.eurekalert.org/news-releases/1095321].

2025-10-24

AI review unveils new strategies for fixing missing traffic data in smart cities

A new review published in Artificial Intelligence and Autonomous Systems (AIAS) highlights how artificial intelligence can tackle the pervasive problem of missing traffic data in intelligent transportation systems. The study categorizes and compares leading data imputation methods, offering a clear roadmap for researchers and city planners to improve traffic management and smart city operations.

As cities worldwide deploy more sensors and intelligent systems to manage traffic, a hidden problem is undermining their efforts: missing data. Sensor failures, communication dropouts, and harsh environmental conditions often lead to gaps in traffic information, complicating everything from real-time traffic light control to long-term urban planning.

image: The review covers missing data patterns, public datasets, evaluation metrics, and a dual classification of imputation methods into structure-based and learning-based approaches, alongside current challenges and performance comparisons.

Credit: Kaiyuan Wang, Xiaobo Chen, Nan Xu/ Shandong Technology and Business University

In a comprehensive new review published in AIAS, researchers from Shandong Technology and Business University survey the latest AI-powered techniques designed to fill in these data gaps automatically. The paper, titled “A Brief Review on Missing Traffic Data Imputation in Intelligent Transportation Systems,” provides a systematic classification of existing methods and compares their performance under various missing data scenarios.

“When traffic data is incomplete, it affects signal timing, congestion prediction, and even emergency response planning,” says Kaiyuan Wang, the lead author of the study. “Our goal was to provide a clear framework to help choose the right method for the right situation.”

The review divides data imputation techniques into two broad categories: structure-based methods, which rely on the inherent low-rank structure and spatiotemporal patterns of traffic data; and learning-based methods, which use deep learning models like GANs, GNNs, and attention mechanisms to learn complex data relationships.

“Structure-based methods are often more interpretable and work well with moderate missing rates,” explains Dr. Xiaobo Chen, the corresponding author. “But in cases of high missing rates or complex patterns, learning-based methods—especially those using graph neural networks or generative models—can be more powerful.”
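To make the structure-based idea concrete, here is a generic low-rank imputation sketch on a toy sensors-by-time traffic matrix using iterative truncated-SVD completion. It is illustrative only, with invented data, and is not one of the specific methods compared in the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensors x time" traffic-speed matrix with an approximately low-rank structure.
U, V = rng.normal(size=(30, 3)), rng.normal(size=(3, 96))
speeds = 60 + 10 * (U @ V)                       # 30 sensors, 96 time slots

mask = rng.random(speeds.shape) > 0.3            # roughly 30% of entries are missing
observed = np.where(mask, speeds, np.nan)

# Simple iterative low-rank completion: fill, truncate SVD to rank 3, repeat.
X = np.where(mask, observed, np.nanmean(observed))
for _ in range(50):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (u[:, :3] * s[:3]) @ vt[:3, :]    # keep the rank-3 approximation
    X = np.where(mask, observed, low_rank)       # only overwrite the missing entries

rmse = np.sqrt(np.mean((X[~mask] - speeds[~mask]) ** 2))
print("imputation RMSE on missing entries:", round(rmse, 2))
```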

The study also summarizes publicly available datasets—such as PeMS, METR-LA, and TaxiBJ—and standard evaluation metrics like MAE, MAPE, and RMSE, providing a valuable resource for researchers looking to benchmark their models.
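For reference, the three metrics mentioned above are straightforward to compute from ground-truth and imputed values; generic definitions are sketched below (MAPE assumes no zero ground-truth entries).

```python
import numpy as np

def imputation_metrics(y_true, y_pred):
    """Return MAE, MAPE (%) and RMSE between ground-truth and imputed values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = 100 * np.mean(np.abs(err) / np.abs(y_true))
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, mape, rmse

print(imputation_metrics([50, 60, 45], [48, 63, 44]))
```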

Perhaps most practically, the team tested multiple methods under unified conditions and developed a decision-making workflow to help users select the best approach based on missing data type, rate, and available computational resources.

Despite these advances, challenges remain. “Real-world traffic data is messy—it’s not just randomly missing. It can be influenced by traffic signals, weather, or even time of day,” says Wang. “We also need methods that are fast enough for real-time use and that can quantify uncertainty in their predictions.”

Looking ahead, the authors highlight several promising directions, including multi-source data fusion, lightweight AI models for edge computing, and uncertainty-aware imputation techniques.

“We’re moving toward a future where AI doesn’t just fill in missing data—it understands why it’s missing and how to best reconstruct it in a way that supports safer and smarter cities,” adds Dr. Chen.

This review offers both a scholarly resource and a practical guide for anyone working in smart transportation, urban analytics, or AI-enabled infrastructure management.

The paper, “A Brief Review on Missing Traffic Data Imputation in Intelligent Transportation Systems,” is now available in AIAS.

Reference:

Wang K, Chen X, Xu N. A brief review on missing traffic data imputation in intelligent transportation systems. Artif. Intell. Auton. Syst. 2025(2):0006, https://doi.org/10.55092/aias20250006

Source from [https://www.eurekalert.org/news-releases/1096045].

2025-10-24

Westlake University establishes Department of Astronomy, with Mao Shude as founding chair

Recently, the founding ceremony of the Department of Astronomy at Westlake University was held on the Yungu campus. Deng Li, Xu Yiming Chair Professor and Vice President of Westlake University, read out the Decision on Establishing the Department of Astronomy at Westlake University and announced that the renowned astrophysicist Professor Mao Shude will serve as the department’s first chair.

Unveiling ceremony

Brian Schmidt, 12th Vice-Chancellor of the Australian National University, 2011 Nobel Laureate in Physics and member of the US National Academy of Sciences; Ewine van Dishoeck, Professor of Molecular Astrophysics at Leiden University and former President of the International Astronomical Union; and Douglas Lin, Professor Emeritus of Astrophysics at the University of California, Santa Cruz, and member of the American Academy of Arts and Sciences, attended the ceremony in person and were appointed members of the International Advisory Committee of the Department of Astronomy at Westlake University.

Westlake University President Shi Yigong said: “Astronomy not only carries humanity’s deepest curiosity about the universe, it is also an important engine for original innovation and technological change. Westlake University upholds the scientific spirit of pursuing excellence and emphasizes basic research and original innovation. The establishment of the astronomy discipline not only completes the layout of Westlake University’s science divisions, it will also become an important strategic pivot for promoting deep interdisciplinary integration.”

Mao Shude

At the ceremony, Mao Shude presented the vision and plans for the Department of Astronomy at Westlake University. He said the department will be devoted to exploring the mysteries of the universe and deepening humanity’s understanding of it. “We hope the Department of Astronomy at Westlake University will become one of the world’s important centers of astronomical research, reaching a world-class level in both scientific research and talent cultivation.”

About Mao Shude

Mao Shude was born in Yiwu, Zhejiang, in November 1966. He received his bachelor’s degree from the University of Science and Technology of China in 1988 and his Ph.D. in astrophysics from Princeton University in 1992. From 1992 to 1999 he held named postdoctoral fellowships at the Harvard-Smithsonian Center for Astrophysics in the United States and the Max Planck Institute for Astrophysics in Germany. From 2000 to 2010 he served successively as lecturer, associate professor and professor in the Department of Physics and Astronomy at the University of Manchester in the UK. From 2010 he was head of the Division of Galaxies and Cosmology at the National Astronomical Observatories, Chinese Academy of Sciences. In October 2014 he became a professor at Tsinghua University and director of its Center for Astrophysics, and from April 2019 to January 2025 he served as the first chair of the Department of Astronomy at Tsinghua University. In early 2025 he joined Westlake University full time as Chair Professor of Astronomy and founding chair of its Department of Astronomy.

Mao Shude has long worked in theoretical astrophysics, including exoplanet searches, gravitational lensing, galactic dynamics, and galaxy formation and evolution. The gravitational microlensing method for exoplanet detection that he proposed has led to the discovery of more than 200 planets and has become one of the core observational methods of Chinese and US space telescope projects. In 2007 he received the Bessel Research Award of the Alexander von Humboldt Foundation in Germany, and he has published more than 140 papers in journals such as Science and The Astrophysical Journal.

Source: Westlake University. For academic sharing only; please contact us for removal in case of infringement.

2025-10-22