Query Scheduler Job

IOMETE provides the Query Scheduler Job to run your queries against a warehouse. Queries can run on a schedule or be triggered manually. To enable the job, follow these steps:


  • Go to Spark Jobs.
  • Click on Create New.

Specify the following parameters (these are examples; change them based on your preference):

  • Name: query-scheduler-job
  • Schedule: 0 0/22 1/1 * *
  • Docker Image: iomete/query_scheduler_job:0.3.0
  • Main application file: local:///app/
  • Environment Variables: LOG_LEVEL: INFO
  • Config file:
# Queries to be run sequentially

# let's create an example database (the database name below is a placeholder)
CREATE DATABASE IF NOT EXISTS example_db

# use the newly created database to run the further queries within this database
USE example_db

# query example one: a proxy table over an external MySQL table
CREATE TABLE IF NOT EXISTS dept_manager_proxy
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:mysql://",
  dbtable "employees.dept_manager",
  driver 'com.mysql.cj.jdbc.Driver',
  user 'tutorial_user',
  password '9tVDVEKp'
)

# another query that depends on the previous query result
CREATE TABLE IF NOT EXISTS dept_manager AS SELECT * FROM dept_manager_proxy


You can find the source code on GitHub. Feel free to customize it for your requirements. Please do not hesitate to contact us if you have any questions.

Create Spark Job

Create Spark Job - Environment Variables

Create Spark Job - Application Config

Finally, hit the Create button.

The job will run on the defined schedule, but you can also trigger it manually by clicking the Run button.
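For context, the example schedule `0 0/22 1/1 * *` is a standard five-field cron expression (minute, hour, day-of-month, month, day-of-week): minute 0, every 22nd hour starting at hour 0, every day. The sketch below is a simplified, illustrative matcher for this subset of cron syntax (`*`, comma lists, and `a/b` step fields only), not the scheduler IOMETE actually uses:

```python
def field_matches(field: str, value: int, start: int = 0) -> bool:
    """Check one cron field against a value (supports '*', 'a,b,c', and 'a/b')."""
    if field == "*":
        return True
    if "/" in field:  # "a/b": starting at a, every b units
        base, step = field.split("/")
        base_val = start if base == "*" else int(base)
        return value >= base_val and (value - base_val) % int(step) == 0
    return any(int(part) == value for part in field.split(","))


def cron_matches(expr: str, minute: int, hour: int, dom: int, month: int, dow: int) -> bool:
    """Return True if the given time components match the 5-field cron expression."""
    m, h, d, mo, w = expr.split()
    return (field_matches(m, minute)
            and field_matches(h, hour)
            and field_matches(d, dom, start=1)   # day-of-month starts at 1
            and field_matches(mo, month, start=1)
            and field_matches(w, dow))
```

Under this reading, `0 0/22 1/1 * *` fires at 00:00 and 22:00 every day, since hours 0 and 22 are the only values reachable in steps of 22 from 0.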

Run Job Manually
