3d cfd models have 5-10 million cells.
each cell solves ~7 equations per iteration.
cfd solvers are iterative and typically need 3-4k iterations per run.
one cpu core can handle up to 0.5 million cells.
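a quick sanity check on these sizing rules (a sketch using only the numbers above: 5-10M cells per model, ~0.5M cells per core):

```python
import math

def cores_needed(cells, cells_per_core=500_000):
    """Minimum cores to keep each core at or under its cell budget."""
    return math.ceil(cells / cells_per_core)

# the 5M and 10M endpoints of the model-size range above
for cells in (5_000_000, 10_000_000):
    print(f"{cells / 1e6:.0f}M cells -> {cores_needed(cells)} cores")
# 5M cells -> 10 cores
# 10M cells -> 20 cores
```

so even the largest model in this range fits comfortably on a small cluster, before accounting for per-application core limits.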
parallel computing - which is better: multicore; smp (large shared memory, suited to structural/thermal apps); cluster (massively parallel mpp: high-speed interconnected hosts, suited to cfd apps); or grid (slow-speed connected hosts)?
per-application memory and core sizing:
fluent: 4 gb/core; 16 cores = 2 serial + 14 parallel; 4 cores = 1 serial + 3 parallel
cfx: 4 gb/core; 8 cores = 1 serial + 7 parallel
ls-dyna: 4 gb/core; 4 cores
abaqus: 4 gb/core; 4 cores
ansys: 4 gb/core; 8 cores
msc nastran / nx nastran: 4 gb/core; 4 cores
suggested config
cpu: 4 nodes x 2 cpu per node x 4 cores per cpu
ram: 4 gb per core = 16 gb per cpu = 32 gb per node
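the totals implied by the suggested config, plus its rough model capacity using the 0.5M-cells-per-core rule above (a sketch; capacity ignores per-application serial/parallel splits):

```python
# suggested config: 4 nodes x 2 cpus per node x 4 cores per cpu, 4 GB/core
nodes, cpus_per_node, cores_per_cpu, gb_per_core = 4, 2, 4, 4

cores = nodes * cpus_per_node * cores_per_cpu               # total cores
ram_per_node = cpus_per_node * cores_per_cpu * gb_per_core  # GB per node
total_ram = nodes * ram_per_node                            # GB in cluster

# rough capacity at ~0.5M cells per core
capacity_cells = cores * 500_000

print(cores, ram_per_node, total_ram, capacity_cells)
# 32 32 128 16000000
```

i.e. 32 cores, 32 GB/node (matching the 4 GB/core rule), 128 GB total, and headroom for a ~16M-cell model, above the 5-10M range quoted.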
costing:
nodes: USD 33,500 for 4 nodes (is this inclusive of rack, cabling, etc.?)
infiniband: USD 14,500 for 4 nodes
quadrics: n/a
storage: lustre, san
mgmt: scheduler, ssi, failover job restart, ganglia, nagios, big brother
scheduler: lsf, USD 15,000
support: next-business-day included; 24x7 with 4-hour response, USD 400
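a rollup of the line items that do have prices (a lower bound only: storage, rack, and in-rack network gear are still open questions, and the unit for the support price isn't stated in the notes):

```python
# known line items from the costing notes above, in USD
items = {
    "nodes (4x)": 33_500,
    "infiniband (4 nodes)": 14_500,
    "lsf scheduler": 15_000,
    "24x7 4-hour support": 400,   # unit (per year? per incident?) unclear
}

total = sum(items.values())
print(f"known items: USD {total:,}")  # known items: USD 63,400
```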
considerations: the application's system requirements; sysadmin hassle (fewest possible units; user mgmt via active directory or ldap); end-user ease of use; cost effectiveness, long shelf life, sustained roi; training and support from vendors
costs to consider: ramp-up time; licensing cost for different hw architectures
scalability: most OSes allow 32 to 64 processors in one partition; for openmp the limit is 8 to 16 threads.
benchmarking: pallas
what about the license server? flexlm
do we have some benchmark results?
virtual lsf master with failover to smp host
common lsf config dir [on nas]
wall clock queues (short, medium, unlimited)
app startup via wrapper scripts that submit to lsf
multiple versions of apps
ad integration
licensing using lsf resources [existing flexlm license server]
elim dynamic resource update based on actual license server usage
move job data to local scratch at dispatch
user machines are standard win32/64
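a minimal sketch of the elim idea in the list above: republish flexlm usage as a dynamic lsf shared resource so jobs can reserve license seats. the resource name `lic_fluent`, the 30-second interval, and the lmstat line format are assumptions; an lsf elim simply prints `<nres> <name> <value>` to stdout in a loop.

```python
import re
import sys
import time

def free_seats(lmstat_text):
    """Extract free seats from an lmstat 'Users of FEATURE:' line
    (output format assumed; adjust the regex to your lmstat version)."""
    m = re.search(r"Total of (\d+) licenses issued; "
                  r"Total of (\d+) licenses in use", lmstat_text)
    issued, used = int(m.group(1)), int(m.group(2))
    return issued - used

def report_forever(query, interval=30):
    """LSF elim protocol: print '<nres> <name> <value>', flush, repeat."""
    while True:
        print(f"1 lic_fluent {query()}")
        sys.stdout.flush()
        time.sleep(interval)

# parse a sample line instead of shelling out to `lmutil lmstat`
sample = "Users of fluent: (Total of 10 licenses issued; Total of 3 licenses in use)"
print(free_seats(sample))  # 7
```

jobs would then reserve a seat at submit time with something like `bsub -R "rusage[lic_fluent=1]" ...`, letting the scheduler hold jobs when no seats are free.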
todo
hardware recommendations
parallel file system design
test server/dev env hardware bom
implementation support plan
handoff
move to preppost licenses [to standardize/separate out solver licenses] [provides a means to keep working while a solve is running]
add mech hpc licenses [permit more than 2 procs per job]
add hpc solver support for all ansys users [3-4 nodes]
is the os on the different nodes (exec, master, license, etc.) decided?
does the bom include rack cost?
does the bom include the cost of network equipment inside the rack?
Sunday, March 29, 2009