(Meeting canceled since it conflicts with the remote all-hands. We'll pick up the discussion on . If there's anything in the agenda below that requires discussion before then, let Paul Domagala (pdomagala) know.)
Infrastructure meetings take place every other Thursday at 9:00 Pacific on the BlueJeans infrastructure-meeting channel: https://bluejeans.com/383721668
Date

Attendees
- Michelle Butler
- Gregory Dubois-Felsmann
- Igor Gaponenko
- Fabio Hernandez
- Brian Van Klaveren
- Simon Krughoff
- Kian-Tat Lim
- Fritz Mueller
- John Swinbank

(Not checked: Paul Domagala)
Goals
Alignment of NCSA-provided services with program needs
Ensure effective use of the current NCSA infrastructure
Refinement and continuous improvement of services, resources and processes
cheller [7:42 PM] So, I did some digging about the LSST compute cluster. One, it appears too many users are thrashing it at one time and the storage can't keep up: it is just one node, so there are too many requests over too small a pipe. Two, there's a local SSD for home, but only some people are using it. It would be far better to get people to use local scratch space for compiles and heavy-duty work instead of GPFS. The people on the local storage are, of course, not affected by the data storms the other users are creating. I'd suggest moving more people's home directories to that SSD and potentially adding more local fast scratch space if this is going to continue to be an issue. Thanks.
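A minimal sketch of the workflow being suggested above: do compiles and other temp-heavy work in node-local scratch, then copy only the results back to GPFS home. The `LOCAL_SCRATCH` variable and the paths here are illustrative assumptions; the actual scratch mount points on the cluster may differ.

```shell
# Pick a per-user working directory on node-local scratch rather than GPFS.
# LOCAL_SCRATCH is a hypothetical site variable; fall back to /tmp if unset.
SCRATCH="${LOCAL_SCRATCH:-/tmp}/$USER-build"
mkdir -p "$SCRATCH"

# Point compilers/tools at local scratch for their temporary files.
export TMPDIR="$SCRATCH"

# Run the heavy build in local scratch, e.g.:
#   cd "$SCRATCH" && make -j8
# then copy only the finished artifacts back to GPFS home, e.g.:
#   cp -r "$SCRATCH/artifacts" "$HOME/project/"

echo "building in: $SCRATCH"
```

This keeps the many small temporary files a compile generates off the shared GPFS node, so other users' I/O is unaffected.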