Igor Gaponenko [3:29 PM] @channel I’m not sure where I should post this complaint, in this forum or in #dm-infrastructure. The problem is that the only filesystem we have on the PDAC *master* node *lsst-qserv-master01* has really horrible I/O performance. I wouldn’t worry much about it if that very same file system weren’t shared by the OS and *Qserv*‘s MySQL/MariaDB database server. This setup bites us in two ways. First, we’re using this file system (via the database server) to store intermediate results reported by *worker* nodes before doing the result set aggregation. In some cases the result sets can be rather large (a few *GB* per query). Second, the database service provides a number of key catalogs, some of which can be rather large (like the so-called *secondary index*). The node’s current disk subsystem is simply no match for these tasks. For example, when I scan one of the *secondary indexes* (just to count the number of entries), I see: ```iostat -m 1
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             363.00        13.31         0.05         13          0
dm-0            364.00        13.31         0.05         13          0
``` *NOTE* how low *both* the CPU utilization and the disk I/O are (for both IOPS and MB/s). This looks just horrible. Is there any chance we could add a second file system based on 4 SSDs in a RAID 10 (1+0) configuration? That shouldn’t be super expensive: four 0.5 TB disks would cost a couple of thousand. And this must be software-based RAID to allow TRIM-ing (if that’s still a problem for the newest SSD disks). If we could put in NVMe disks, it would be even better.
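For anyone who wants to reproduce the measurement, here is roughly what I’m doing; a minimal sketch, with the database/table names (`qservMeta`, `secondary_index`) being placeholders, and the last two lines being an optional sanity check of the raw sequential-read ceiling of the same device: ```# Terminal 1: sample per-device throughput every second, in MB.
iostat -m 1 /dev/sda

# Terminal 2: force a full scan of the index table.
mysql qservMeta -e "SELECT COUNT(*) FROM secondary_index;"

# Sanity check: raw sequential read from the device, bypassing the
# page cache, to compare against the ~13 MB/s seen above.
sudo dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct
# or: sudo hdparm -t /dev/sda
```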
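Roughly what I have in mind for the new file system, as a sketch only (device names, mount point, and filesystem choice are all placeholders, and recent kernels pass TRIM through md RAID 10): ```# Software RAID 10 across the four new SSDs.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Filesystem and a dedicated mount point for the MySQL/MariaDB data.
sudo mkfs.xfs /dev/md0
sudo mkdir -p /qserv/data
sudo mount /dev/md0 /qserv/data

# Periodic TRIM via the systemd timer, instead of the 'discard'
# mount option.
sudo systemctl enable --now fstrim.timer
```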