
Infrastructure meetings take place every other Thursday at 9:00 Pacific on the BlueJeans infrastructure-meeting channel: https://bluejeans.com/383721668

Date

Attendees


Goals

  • Ensure successful use of the current NCSA infrastructure
  • Plan for near- and medium-term activities

Discussion items

Item | Who | Notes
Review of last meeting notes & action items | | Any updates
lsst-dev i/o performance issues | | See attachments below
Role of Nebula in prototyping | |
PDAC Status | Gregory Dubois-Felsmann |
Topics for next meeting | |



Action items

Please enter action items in the form:

Responsible Person, Due Date, Description



Attachments

SLAC conversation re. lsst-dev i/o performance problems

Igor Gaponenko [3:29 PM]
@channel I’m not sure where I should post this complaint, in this forum or in #dm-infrastructure. The problem is that the only filesystem we have on the PDAC *master* node *lsst-qserv-master01* has really horrible I/O performance. I wouldn’t worry much about it if that same file system were not shared by the OS and *Qserv*’s MySQL/MariaDB database server. This setup bites us in two ways. First, we use this file system (via the database server) to store intermediate results reported by *worker* nodes before the result-set aggregation is done. In some cases the result sets can be rather large (a few *GB* per query). Second, the database service hosts a number of key catalogs, some of which can be rather large (like the so-called *secondary index*). The current disk subsystem of the node is just no match for those tasks. For example, when I scan one of the *secondary index* tables (just to count the number of entries) I see:
```
iostat -m 1

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.60   0.00     0.12     0.07    0.00  99.20

Device:     tps  MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
sda      437.00      14.94       0.01       14        0
dm-0     437.00      14.70       0.01       14        0

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.40   0.00     0.05     0.27    0.00  99.28

Device:     tps  MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
sda      363.00      13.31       0.05       13        0
dm-0     364.00      13.31       0.05       13        0
```
*NOTE* how low *BOTH* the CPU utilization and the disk I/O (for both IOPS and MB/s) are. This looks just horrible. Is there any chance we could add a second file system based on 4 SSDs in a RAID10 (0+1) configuration? That shouldn’t be super expensive. Four 0.5 TB disks would cost a couple of thousand. And it must be software-based RAID to allow TRIM-ing (if that’s still a problem for the newest SSD disks). If we could put in NVMe disks it would be even better.
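As a rough way to put a number on the current disk's random-read ceiling independently of MySQL, something like the following fio run could be used. The target path, file size, and 16 KB block size are illustrative placeholders, not values taken from lsst-qserv-master01:

```
# Hypothetical fio run measuring sustained random-read throughput and IOPS on the
# filesystem backing dm-0; the path, size, and block size below are placeholders.
fio --name=randread --filename=/tmp/fio.test --size=4G \
    --rw=randread --bs=16k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=60 --time_based --group_reporting
```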
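If the proposed 4-SSD software RAID10 were adopted, the setup would presumably look roughly like the sketch below. The device names, mount point, and filesystem choice are assumptions for illustration, not a worked-out plan for the node:

```
# Sketch only: /dev/sd[b-e] stand in for the four SSDs, /qserv/ssd for the new mount point.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mount /dev/md0 /qserv/ssd
# Periodic fstrim (e.g. from cron) passes TRIM through the md layer to the SSDs
# on reasonably recent kernels.
fstrim /qserv/ssd
```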
