CIRTECH Institute, Ho Chi Minh City University of Technology (HUTECH), Ho Chi Minh City, Viet Nam.
CIRTECH Institute, Ho Chi Minh City University of Technology (HUTECH), Ho Chi Minh City, Viet Nam; Department of Architectural Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Republic of Korea.
ISA Trans. 2020 Aug;103:177-191. doi: 10.1016/j.isatra.2020.03.033. Epub 2020 Apr 10.
We investigate a novel deep-learning-based approach to computational structural optimization. After solving the stiffness formulation of the structures, optimization algorithms were applied to improve the structural computation. The standard 10-bar truss example was revisited to illustrate the mechanism of neural networks and deep learning. Several benchmark problems of 2D and 3D truss structures were used to verify the reliability of the present approach, whose extension to other engineering structures is straightforward. To enhance computational efficiency, a constant-sum technique was proposed to generate input data for multiple similar variables. Both displacement and stress limits were enforced as constraints of the optimization problem. The optimized cross-section data, with total weight as the objective function, were then employed in the context of deep learning. Stochastic gradient descent (SGD) with Nesterov's accelerated gradient (NAG), root mean square propagation (RMSProp), and adaptive moment estimation (Adam) optimizers were compared in terms of convergence. In addition, this paper devised Chebyshev polynomials as a new approach to activation functions in single-layer neural networks. As expected, their convergence was quicker than that of popular learning functions, especially for short training runs with a small number of epochs on the tested problems. Finally, a split-data technique for linear regression was proposed to deal with some sensitive data.
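The Chebyshev-polynomial idea above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes inputs scaled to [-1, 1], builds Chebyshev features T_0..T_n via the standard recurrence, and fits the linear readout of a single-layer model by least squares (the paper instead trains with gradient-based optimizers such as SGD/NAG, RMSProp, and Adam). The target function and degree are arbitrary choices for demonstration.

```python
import numpy as np

def chebyshev_features(x, degree):
    """Evaluate Chebyshev polynomials T_0..T_degree at x (x in [-1, 1]),
    using the recurrence T_{n+1}(x) = 2*x*T_n(x) - T_{n-1}(x)."""
    T = [np.ones_like(x), np.asarray(x, dtype=float)]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[: degree + 1], axis=-1)

# Single-layer model: a linear combination of Chebyshev feature terms.
# Here the weights are obtained by least squares for simplicity.
x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x)                    # illustrative target function
Phi = chebyshev_features(x, degree=8)    # (200, 9) feature matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
print(np.max(np.abs(pred - y)) < 1e-3)   # a handful of terms already fits closely
```

Because Chebyshev polynomials form a well-conditioned basis on [-1, 1], a low-degree expansion often approximates smooth responses with very few terms, which is consistent with the faster convergence reported for short training runs.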