Hardware based on parallel computing architectures has recently been gaining popularity in high-performance computing.
The efficiency of parallel processing hardware in engineering problem solving, such as the computer simulation of physical processes, does not depend directly on the number of processors: four CPU cores do not in fact provide a fourfold speed increase over one core in solving complex engineering problems. Similarly, moving the computation to graphics cards with hundreds of cores cannot provide a hundredfold increase in speed.
First of all, parallel computation acceleration is limited by the computational algorithms themselves; running algorithms with a low degree of parallelization on supercomputers and high-performance workstations is irrational. The notion of "efficiency of parallelization" is explained by Amdahl's law, according to which, if at least 1/10 of a program is executed sequentially, the acceleration cannot exceed 10 times the original speed regardless of the number of cores employed.
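The bound stated by Amdahl's law can be illustrated with a short sketch (the function name is ours, chosen for illustration):

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Upper bound on speedup for a program in which serial_fraction
    of the work runs sequentially, per Amdahl's law."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With 1/10 of the program sequential, the speedup approaches
# but never reaches 10x, no matter how many cores are used:
for cores in (4, 100, 10_000):
    print(cores, round(amdahl_speedup(0.1, cores), 2))
# → 4 3.08, 100 9.17, 10000 9.99
```

Even ten thousand cores yield less than a tenfold speedup, which is why buying more hardware cannot compensate for a poorly parallelized algorithm.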
Telling examples of the limited effectiveness of algorithm parallelization for solving engineering problems can be found in the relatively modest scaling results of the worldwide leaders in computer-aided engineering (CAE) software: Abaqus and Ansys.
Following the release of the article “Thermal analysis of a lengthy section of a gas pipeline on permafrost”, we received many questions from users.
In this post, we cover the most frequently asked questions concerning the functionality of the updated version of the Frost 3D Universal software. Firstly, however, we would like to remind readers that the new version of the software was released in May 2014. In it, we implemented new technologies in the architecture of the software and its main components, enabling computations on meshes of up to 100 million nodes on a PC. To demonstrate the performance of the newest version of Frost 3D Universal, we conducted a thermal analysis of a long section of pipeline lying on permafrost, with a mesh of 58.5 million nodes.
Question: Why do we need such large computational meshes?
Answer: The necessity for such large quantities of computational mesh nodes derives from the following factors:
1) The computation of extensive regions and long or massive objects often requires many elements to discretize the computational domain.
2) The computational domain often contains relatively small features: there could, for example, be a thin layer of heat insulation, or thin ground strata. A significant increase in mesh refinement is required to discretize these relatively minuscule elements.
3) Areas with significant temperature gradients (near heat insulators, heat sources, cooling devices, etc.) require increased computational mesh density, which significantly increases the total number of nodes in the computational domain.
Note that even with a non-uniform cell size (i.e., an irregular computational mesh), we still need a large number of nodes. The increase in cell size in an irregular mesh must be very gradual; otherwise, the numerical method returns significantly less accurate results.