Duleba Dominik, Martínez-Aviñó Adria, Revenko Andriy, Johnson Robert P
School of Chemistry, University College Dublin, Belfield, Dublin 4, Ireland.
ACS Meas Sci Au. 2025 Apr 15;5(3):353-366. doi: 10.1021/acsmeasuresciau.5c00023. eCollection 2025 Jun 18.
In nanoscale sensors, understanding and predicting sensor sensitivity is challenging because the physical phenomena that govern the transduction mechanism are often highly nonlinear and highly coupled. The sensitivity of a sensor is related both to the magnitude of the analyte-induced signal change and to the random-error-induced fluctuation of the sensor's output. The extent to which these can be controlled, by carefully designing either the geometric or the operating conditions of the sensor, determines the difference in signal output between the presence and absence of the analyte, as well as the impact of random errors on the distribution of these signal outputs. Herein, we use ion-current-rectifying nanopore sensors as a simplified case study to show how geometric and operating parameters can enable sensitivity optimization. Finite element analysis is used to obtain distributions of the sensor output, and Sobol analysis is then used to highlight the most important contributions to sensor output errors. Furthermore, the magnitude of the signal change is considered alongside the spread of the output to calculate and optimize the sensor sensitivity. We highlight that the most important parameters contributing to the output variance are geometric. We observed that as the sensor is operated at smaller pore radii and lower electrolyte concentrations, the influence of cone-angle errors increases, the influence of pore-radius errors decreases, and the output distribution becomes broader. We also show that the highest sensitivity is expected for larger pores operated at low electrolyte concentrations, and our simulation results are validated against experimental results. Recommendations to achieve optimum sensitivity are given for a range of scenarios in which ion-rectifying nanopore sensors may be used.
This work aims to provide a framework for the nanoscale community to optimize sensitivity using simulations, as the analysis highlighted herein is viable for any system that can be modeled using continuum physics.
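To illustrate the variance-decomposition step the abstract describes, the sketch below estimates first-order Sobol indices, the fraction of the output variance attributable to each input parameter, using the Saltelli "pick-freeze" Monte Carlo scheme. This is a minimal, self-contained illustration on a toy additive model, not the paper's finite-element nanopore model; the function names and the two-parameter model are assumptions made for the example.

```python
import random

def sobol_first_order(model, n_vars, n_samples=20000, seed=1):
    """Estimate first-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y)
    for independent inputs uniform on [0, 1], via the Saltelli
    pick-freeze Monte Carlo scheme."""
    rng = random.Random(seed)
    # Two independent sample matrices, A and B (n_samples x n_vars).
    A = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    # Total output variance from the pooled evaluations.
    mean = sum(fA + fB) / (2 * n_samples)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n_samples)
    indices = []
    for i in range(n_vars):
        # AB_i: matrix A with column i replaced by column i of B.
        fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Saltelli (2010) estimator for the partial variance V_i.
        vi = sum(fb * (fab - fa)
                 for fb, fab, fa in zip(fB, fABi, fA)) / n_samples
        indices.append(vi / var)
    return indices

# Toy additive model: Y = 2*X1 + X2 with X1, X2 ~ U(0, 1).
# Analytically, S1 = 4/(4+1) = 0.8 and S2 = 1/5 = 0.2.
model = lambda x: 2.0 * x[0] + 1.0 * x[1]
s = sobol_first_order(model, 2)
```

In the nanopore setting, `model` would be replaced by the finite-element solver mapping geometric and operating parameters (pore radius, cone angle, electrolyte concentration) to the rectification output, and the indices would rank which parameter errors dominate the output variance.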