This is done programmatically from your Java application, using the Jet API. Jet is meant to be used operationally, as part of applications you develop and deploy.
Jet jobs run in an isolated class loader; the classes/jars you add to JobConfig are distributed to the cluster when the job is started. See http://docs.hazelcast.org/docs/jet/0.6/manual/#practical-considerations for details.
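For example, a minimal submission might look like this (a sketch, not taken from the manual; the map names and the jar path are illustrative):

    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.JetInstance;
    import com.hazelcast.jet.config.JobConfig;
    import com.hazelcast.jet.pipeline.Pipeline;
    import com.hazelcast.jet.pipeline.Sinks;
    import com.hazelcast.jet.pipeline.Sources;

    public class SubmitJob {
        public static void main(String[] args) {
            // Connect to a running Jet cluster as a client.
            JetInstance jet = Jet.newJetClient();

            // Trivial pipeline: copy entries from one IMap to another.
            Pipeline p = Pipeline.create();
            p.drawFrom(Sources.map("input"))
             .drainTo(Sinks.map("output"));

            // Classes/jars added here are shipped to the cluster and loaded
            // in the job's isolated class loader.
            JobConfig config = new JobConfig();
            config.addClass(SubmitJob.class);          // a single class
            config.addJar("/path/to/job-classes.jar"); // or a whole jar (illustrative path)

            jet.newJob(p, config).join();
            jet.shutdown();
        }
    }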
If I already have a Hadoop cluster, which can run a spark job in a jar file on HDFS with spark-submit, how can I install Hazelcast Jet so that I can do the same as with Spark?
We can use HDFS as a source or a sink. See https://github.com/hazelcast/hazelcast-jet-code-samples/blob... for an HDFS Wordcount example.
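Roughly, that kind of job reads lines from HDFS, splits them into words, and counts occurrences per word. A sketch of the idea, not the linked sample itself (it assumes the HdfsSources/HdfsSinks connectors from the hazelcast-jet-hadoop module and Hadoop's mapred API; the exact package of the connectors varies by version, and the paths below are made up):

    import com.hazelcast.jet.HdfsSinks;   // from hazelcast-jet-hadoop (package may differ by version)
    import com.hazelcast.jet.HdfsSources;
    import com.hazelcast.jet.Jet;
    import com.hazelcast.jet.Traversers;
    import com.hazelcast.jet.aggregate.AggregateOperations;
    import com.hazelcast.jet.pipeline.Pipeline;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;

    public class HdfsWordCount {
        public static void main(String[] args) {
            // Hadoop job configuration: where to read from and write to.
            JobConf jobConf = new JobConf();
            jobConf.setInputFormat(TextInputFormat.class);
            jobConf.setOutputFormat(TextOutputFormat.class);
            FileInputFormat.addInputPath(jobConf, new Path("hdfs:///wordcount/input"));    // illustrative
            FileOutputFormat.setOutputPath(jobConf, new Path("hdfs:///wordcount/output")); // illustrative

            Pipeline p = Pipeline.create();
            p.drawFrom(HdfsSources.<LongWritable, Text>hdfs(jobConf))
             // TextInputFormat yields (byte offset, line) entries; split each line into words
             .flatMap(e -> Traversers.traverseArray(
                     e.getValue().toString().toLowerCase().split("\\W+")))
             .filter(word -> !word.isEmpty())
             .groupingKey(word -> word)
             .aggregate(AggregateOperations.counting())
             .drainTo(HdfsSinks.hdfs(jobConf));

            Jet.newJetClient().newJob(p).join();
        }
    }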