
Query Scheduler Job

IOMETE provides the Query Scheduler Job to run your queries against a warehouse. Queries can run on a schedule or be triggered manually. To enable the job, follow these steps:


  • In the left sidebar menu choose Spark Jobs
  • Click on Create
IOMETE Spark Jobs

Specify the following parameters (these are examples; adjust them to your preference):

  • Name: query-scheduler-job
  • Schedule: 0 0/22 1/1 * *
  • Docker image: iomete/query_scheduler_job:0.3.0
  • Main application file: local:///app/
  • Environment variables: LOG_LEVEL: INFO
IOMETE Spark Jobs Create
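The Schedule field is a standard five-field cron expression. As a quick sanity check, the small Python sketch below splits the example expression above into its fields, assuming the usual minute/hour/day-of-month/month/day-of-week order:

```python
# Decode a 5-field cron expression into named fields.
# Field order follows the common cron convention.
def describe_cron(expr):
    names = ["minute", "hour", "day of month", "month", "day of week"]
    fields = expr.split()
    if len(fields) != len(names):
        raise ValueError("expected a 5-field cron expression")
    return dict(zip(names, fields))

# The example schedule from the parameters above:
schedule = describe_cron("0 0/22 1/1 * *")
# "0"    -> at minute 0
# "0/22" -> starting at hour 0, every 22 hours
# "1/1"  -> starting at day 1, every day of the month
```

So this example fires at minute 0, every 22 hours, on every day.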
  • Config file: Scroll down, expand the Application configurations section, and click Add config file

    IOMETE Spark Jobs add config file
# Queries to be run sequentially

# let's create an example database (the database name is just an example)
CREATE DATABASE IF NOT EXISTS example_db;

# use the newly created database to run the further queries within this database
USE example_db;

# query example one
CREATE TABLE IF NOT EXISTS dept_manager_proxy
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:mysql://",
  dbtable "employees.dept_manager",
  driver 'com.mysql.cj.jdbc.Driver',
  user 'tutorial_user',
  password '9tVDVEKp'
);

# another query that depends on the previous query result
CREATE TABLE IF NOT EXISTS dept_manager AS SELECT * FROM dept_manager_proxy;
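To illustrate the "run sequentially" behavior, here is a minimal Python sketch of how such a config file could be executed statement by statement. The `execute` callable stands in for a real SQL engine (e.g. `spark.sql`); it is a hypothetical parameter for illustration, not part of the IOMETE job's actual API:

```python
# Run the statements of a query config file sequentially.
# Comment lines (starting with '#') are skipped; statements are
# separated by semicolons, matching the config file shown above.
def run_config(config_text, execute):
    results = []
    for raw in config_text.split(";"):
        # drop comment lines and surrounding blank space
        stmt = "\n".join(
            line for line in raw.splitlines()
            if line.strip() and not line.strip().startswith("#")
        ).strip()
        if stmt:
            results.append(execute(stmt))
    return results

# usage with a stub engine that just records each statement it receives
log = []
run_config(
    "# create db\nCREATE DATABASE IF NOT EXISTS example_db;\nUSE example_db;",
    lambda stmt: log.append(stmt) or stmt,
)
```

Because the statements run in order, a later query (like the `dept_manager` table above) can safely depend on a table created by an earlier one.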

Finally, hit the Create button.


You can find the source code on GitHub. Feel free to customize it for your requirements, and please do not hesitate to contact us if you have any questions.

The job will run on the defined schedule, but you can also trigger it manually by clicking the Run button.

IOMETE Run Job Manually