- 1. Self-developed scheduling component, removing the Quartz dependency: on one hand to streamline the system and reduce redundant dependencies, on the other to improve the system's controllability and stability;
    - Trigger: periodic triggering on a single node, running like a delay queue;
    - Scheduling: cluster nodes compete and cooperate in a load-balanced manner; lock contention -> update trigger info -> push into the time wheel -> release lock -> contend again;
- 2. Underlying table structure refactored: the 11 Quartz-related tables are removed and the remaining tables are cleaned up and optimized;
- 3. Underlying thread model refactored: the Quartz thread pool is removed, reducing system thread and memory overhead;
This commit is contained in:
xuxueli 2019-05-22 12:01:48 +08:00
parent 9353bdcbcf
commit e01d2bc9b5
35 changed files with 2770 additions and 1130 deletions

View File

@ -52,7 +52,7 @@ XXL-JOB是一个轻量级分布式任务调度平台其核心设计目标是
## Features
- 1. Simple: tasks can be created, read, updated and deleted through the Web UI; easy to use, up and running in a minute;
- 2. Dynamic: task status can be modified dynamically, tasks can be started/stopped, and running tasks can be terminated, all taking effect immediately;
- 3. Scheduler HA (centralized): centralized design; the "scheduling center" is built on clustered Quartz and supports cluster deployment, guaranteeing scheduler HA;
- 3. Scheduler HA (centralized): centralized design; the "scheduling center" is a self-developed scheduling component and supports cluster deployment, guaranteeing scheduler HA;
- 4. Executor HA (distributed): tasks are executed in a distributed fashion; the task "executor" supports cluster deployment, guaranteeing execution HA;
- 5. Registry: executors automatically register tasks periodically, and the scheduling center automatically discovers registered tasks and triggers them; executor addresses can also be entered manually;
- 6. Elastic scaling: once a new executor machine comes online or goes offline, tasks are re-assigned on the next scheduling cycle;

View File

@ -20,7 +20,7 @@ XXL-JOB是一个轻量级分布式任务调度平台其核心设计目标是
### 1.3 Features
- 1. Simple: tasks can be created, read, updated and deleted through the Web UI; easy to use, up and running in a minute;
- 2. Dynamic: task status can be modified dynamically, tasks can be started/stopped, and running tasks can be terminated, all taking effect immediately;
- 3. Scheduler HA (centralized): centralized design; the "scheduling center" is built on clustered Quartz and supports cluster deployment, guaranteeing scheduler HA;
- 3. Scheduler HA (centralized): centralized design; the "scheduling center" is a self-developed scheduling component and supports cluster deployment, guaranteeing scheduler HA;
- 4. Executor HA (distributed): tasks are executed in a distributed fashion; the task "executor" supports cluster deployment, guaranteeing execution HA;
- 5. Registry: executors automatically register tasks periodically, and the scheduling center automatically discovers registered tasks and triggers them; executor addresses can also be entered manually;
- 6. Elastic scaling: once a new executor machine comes online or goes offline, tasks are re-assigned on the next scheduling cycle;
@ -776,18 +776,16 @@ try{
- /xxl-job-executor-samples: executor sample project; you can develop on top of it, or convert an existing project into an executor project;
### 5.2 "Scheduling database" configuration
The XXL-JOB scheduling module was implemented on a Quartz cluster; its "scheduling database" extends Quartz's 11 clustered MySQL tables.
The XXL-JOB scheduling module is built on a self-developed scheduling component and supports cluster deployment. The scheduling database tables are described below:
XXL-JOB first customizes Quartz's native table structure, prefixed with XXL_JOB_QRTZ_:
- XXL_JOB_LOCK: task-scheduling lock table;
- XXL_JOB_GROUP: executor info table, maintaining task executor information;
- XXL_JOB_INFO: scheduling info table, storing extended information for XXL-JOB tasks such as task group, task name, machine address, executor, input parameters and alarm emails;
- XXL_JOB_LOG: scheduling log table, storing the history of task scheduling such as scheduling results, execution results, input parameters, scheduling machine and executor;
- XXL_JOB_LOGGLUE: task GLUE log, storing the GLUE update history to support GLUE version rollback;
- XXL_JOB_REGISTRY: executor registry table, maintaining the addresses of online executors and scheduling-center machines;
- XXL_JOB_USER: system user table;
On top of these, several extension tables were added, as follows:
- XXL_JOB_QRTZ_TRIGGER_GROUP: executor info table, maintaining task executor information;
- XXL_JOB_QRTZ_TRIGGER_REGISTRY: executor registry table, maintaining the addresses of online executors and scheduling-center machines;
- XXL_JOB_QRTZ_TRIGGER_INFO: scheduling info table, storing extended information for XXL-JOB tasks such as task group, task name, machine address, executor, input parameters and alarm emails;
- XXL_JOB_QRTZ_TRIGGER_LOG: scheduling log table, storing the history of task scheduling such as scheduling results, execution results, input parameters, scheduling machine and executor;
- XXL_JOB_QRTZ_TRIGGER_LOGGLUE: task GLUE log, storing the GLUE update history to support GLUE version rollback;
So the XXL-JOB scheduling database uses 16 tables in total.
### 5.3 Architecture design
#### 5.3.1 Design philosophy
@ -820,58 +818,30 @@ Quartz作为开源作业调度中的佼佼者是作业调度的首选。但
XXL-JOB addresses the above shortcomings of Quartz.
#### 5.4.2 RemoteHttpJobBean
With plain Quartz development, task logic is usually maintained in a QuartzJobBean, which couples things tightly. In XXL-JOB the "scheduling module" and the "task module" are fully decoupled: all scheduled tasks in the scheduling module share one QuartzJobBean, RemoteHttpJobBean. Each task keeps its own parameters in the extension tables; when RemoteHttpJobBean fires, it parses the task-specific parameters and makes a remote call to the corresponding executor service.
#### 5.4.2 Self-developed scheduling module
XXL-JOB ultimately chose a self-developed scheduling component (earlier versions were based on Quartz): on one hand to streamline the system and reduce redundant dependencies, on the other to improve the system's controllability and stability;
This invocation model resembles an RPC call: RemoteHttpJobBean acts as the invocation proxy, while the executor provides the remote service.
In XXL-JOB the "scheduling module" and the "task module" are fully decoupled. When scheduling a task, the scheduling module parses the task-specific parameters and makes a remote call to the corresponding executor service. This invocation model resembles an RPC call: the scheduling center acts as the invocation proxy, while the executor provides the remote service.
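Concretely, the proxy side of this model can be used as in the sketch below. ExecutorBiz, ReturnT and XxlJobScheduler.getExecutorBiz(...) all appear later in this commit; the address value and the success-code check are illustrative, the way the failover router uses the heartbeat call:
```
import com.xxl.job.admin.core.conf.XxlJobScheduler;
import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.biz.model.ReturnT;

public class ExecutorCallSketch {

    // Ask a remote executor whether it is alive, via the cached RPC proxy.
    // The address would normally come from the XXL_JOB_REGISTRY table.
    public static boolean isExecutorAlive(String address) {
        try {
            ExecutorBiz executorBiz = XxlJobScheduler.getExecutorBiz(address); // cached proxy per address
            ReturnT<String> beatResult = executorBiz.beat();                   // remote heartbeat call
            return beatResult != null && beatResult.getCode() == ReturnT.SUCCESS_CODE;
        } catch (Exception e) {
            return false;
        }
    }
}
```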
#### 5.4.3 Scheduling center HA cluster
The Quartz-based cluster solution uses MySQL as the database. In a distributed, concurrent environment, each node reports its tasks into the database; at execution time the triggers are fetched from the database, and if a trigger's name and fire time are identical, only one node executes the task.
```
# for cluster
org.quartz.jobStore.tablePrefix = XXL_JOB_QRTZ_
org.quartz.scheduler.instanceId: AUTO
org.quartz.jobStore.class: org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered: true
org.quartz.jobStore.clusterCheckinInterval: 1000
```
The database-based cluster solution uses MySQL. In a distributed, concurrent environment, each node reports its tasks into the database; at execution time the triggers are fetched from the database, and if a trigger's name and fire time are identical, only one node executes the task.
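A minimal sketch of how such database-level coordination can work, using the XXL_JOB_LOCK table and its 'schedule_lock' row introduced in this commit. The JDBC wiring below is a simplified illustration, not the exact JobScheduleHelper implementation:
```
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class ScheduleLockSketch {

    // One scheduling pass: take the cluster-wide row lock, do the work, release it.
    public static void scheduleOnce(DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false); // hold the row lock until commit
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT * FROM XXL_JOB_LOCK WHERE lock_name = 'schedule_lock' FOR UPDATE")) {
                lock.execute(); // only one cluster node gets past this line at a time

                // ... read due jobs from XXL_JOB_INFO, push them into the time wheel,
                //     and update trigger_last_time / trigger_next_time ...
            } finally {
                conn.commit(); // releases the row lock; other nodes may now compete
            }
        }
    }
}
```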
#### 5.4.4 Scheduling thread pool
Scheduling is implemented with a thread pool, so that a single blocked thread cannot delay task scheduling (a sketch follows the removed Quartz configuration below).
```
org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount: 50
org.quartz.threadPool.threadPriority: 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread: true
```
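With the Quartz thread-pool configuration above removed, the same goal can be met with an ordinary JDK thread pool. A rough sketch follows; the pool sizes and class name are illustrative and not the actual JobTriggerPoolHelper code:
```
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TriggerPoolSketch {

    // A bounded pool: each due job is handed to a worker thread, so one slow or
    // blocked trigger cannot stall the scheduling loop.
    private final ThreadPoolExecutor triggerPool = new ThreadPoolExecutor(
            10, 200, 60L, TimeUnit.SECONDS,           // pool sizes here are examples
            new LinkedBlockingQueue<Runnable>(1000));

    public void trigger(final int jobId) {
        triggerPool.execute(new Runnable() {
            @Override
            public void run() {
                // e.g. JobTriggerPoolHelper.trigger(jobId, TriggerTypeEnum.CRON, -1, null, null);
            }
        });
    }
}
```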
#### 5.4.5 @DisallowConcurrentExecution
By default the "scheduling center" of the XXL-JOB scheduling module does not use this annotation, i.e. parallel execution is enabled, because RemoteHttpJobBean is a shared QuartzJobBean. With multi-threaded scheduling the probability of the scheduling module being blocked is very low, which greatly increases the capacity of the scheduling system.
#### 5.4.5 Parallel scheduling
The XXL-JOB scheduling module runs in parallel by default; with multi-threaded scheduling the probability of the scheduling module being blocked is very low, which greatly increases the capacity of the scheduling system.
Although each task is scheduled in parallel in the scheduling module, once the trigger reaches the "executor" in the task module it is executed serially, and task termination is supported.
#### 5.4.6 misfire
Handling rules for a missed trigger time.
Possible causes: service restart; scheduling threads blocked by QuartzJobBean and exhausted; a task annotated with @DisallowConcurrentExecution whose previous run keeps blocking, so the next run is missed.
#### 5.4.6 Overdue-trigger handling strategy
When a task misses its trigger time:
- Possible causes: service restart; scheduling threads blocked and exhausted; the previous run keeps blocking, so the next run is missed;
- Handling strategy (sketched below):
    - overdue by 5s or less: trigger once immediately and compute the next trigger time;
    - overdue by more than 5s: skip the missed trigger and only compute the next trigger time;
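A hedged sketch of this strategy, assuming the CronExpression class bundled in this commit keeps Quartz's getNextValidTimeAfter API; the immediate-trigger call is shown only as a comment:
```
import com.xxl.job.admin.core.cron.CronExpression;
import java.util.Date;

public class OverdueSketch {

    private static final long OVERDUE_THRESHOLD_MS = 5000;

    // Decide what to do with a job whose planned trigger time has already passed,
    // and return the next trigger time to store back into XXL_JOB_INFO.
    public static long handleOverdue(String cron, long plannedTriggerTime, long now) throws Exception {
        if (now - plannedTriggerTime <= OVERDUE_THRESHOLD_MS) {
            // overdue by 5s or less: fire once right away, e.g.
            // JobTriggerPoolHelper.trigger(jobId, TriggerTypeEnum.CRON, -1, null, null);
        }
        // in both cases, skip the missed slots and plan the next run after "now"
        Date next = new CronExpression(cron).getNextValidTimeAfter(new Date(now));
        return next != null ? next.getTime() : 0;
    }
}
```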
The misfire threshold in quartz.properties is configured as follows (unit: milliseconds):
```
org.quartz.jobStore.misfireThreshold: 60000
```
Misfire rules:
withMisfireHandlingInstructionDoNothing: do not fire now; wait for the next scheduled time;
withMisfireHandlingInstructionIgnoreMisfires: start executing immediately from the first missed fire time;
withMisfireHandlingInstructionFireAndProceed: fire once immediately, using the current time as the new schedule base;
XXL-JOB's default misfire rule is withMisfireHandlingInstructionDoNothing:
```
CronScheduleBuilder cronScheduleBuilder = CronScheduleBuilder.cronSchedule(jobInfo.getJobCron()).withMisfireHandlingInstructionDoNothing();
CronTrigger cronTrigger = TriggerBuilder.newTrigger().withIdentity(triggerKey).withSchedule(cronScheduleBuilder).build();
```
#### 5.4.7 Log callback service
When the "scheduling center" of the scheduling module is deployed as a web service, it both acts as the scheduling center and provides an API service for executors.
@ -926,7 +896,7 @@ xxl-job-admin#com.xxl.job.admin.controller.JobApiController.callback
#### 5.4.11 Fully asynchronous & lightweight
- Fully asynchronous design: in XXL-JOB business logic runs on remote executors and the trigger flow is fully asynchronous. Compared with executing business logic directly in Quartz's QuartzJobBean, this greatly reduces the time scheduling threads are occupied;
- Fully asynchronous design: in XXL-JOB business logic runs on remote executors and the trigger flow is fully asynchronous. Compared with executing business logic directly inside the scheduling center, this greatly reduces the time scheduling threads are occupied;
- Asynchronous scheduling: on each trigger the scheduling center sends a single scheduling request, which is first pushed into the "async scheduling queue" and then asynchronously forwarded to the remote executor;
- Asynchronous execution: the executor stores the request in its "async execution queue" and immediately responds to the scheduling center, then runs the task asynchronously (see the sketch after this list);
- Lightweight design: each job's logic in the XXL-JOB scheduling center is very "light"; on top of the fully asynchronous design, a single job run averages under about 10ms, essentially the network overhead of one request, so a limited number of threads can support a large number of concurrent jobs;
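A minimal sketch of the executor-side "async execution queue" described above; the class and thread names are illustrative, not the actual executor implementation:
```
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncExecSketch {

    private final LinkedBlockingQueue<Runnable> execQueue = new LinkedBlockingQueue<Runnable>();

    // Called on the RPC thread: enqueue and return immediately,
    // so the scheduling center is never blocked by business logic.
    public String onTrigger(Runnable jobLogic) throws InterruptedException {
        execQueue.put(jobLogic);
        return "SUCCESS";
    }

    // A dedicated worker drains the queue; triggers handled here run serially.
    public void startWorker() {
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        execQueue.take().run();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        }, "xxl-job-async-exec-sketch");
        worker.setDaemon(true);
        worker.start();
    }
}
```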
@ -988,7 +958,7 @@ XXL-JOB会为每次调度请求生成一个单独的日志文件需要通过
Since v1.5, the "execution machine" property has been removed from tasks; instead, remote executor addresses are obtained dynamically through task registration and auto-discovery, and execution happens there.
AppName: the unique identifier of each executor machine cluster; task registration is performed at the granularity of an "executor", and each task learns the machine list of the executor it is bound to;
Registry table: see the "XXL_JOB_QRTZ_TRIGGER_REGISTRY" table; when registering, an "executor" periodically maintains a registration record, i.e. the binding between machine address and AppName, so that the "scheduling center" can dynamically discover the online machines of each AppName;
Registry table: see the "XXL_JOB_REGISTRY" table; when registering, an "executor" periodically maintains a registration record, i.e. the binding between machine address and AppName, so that the "scheduling center" can dynamically discover the online machines of each AppName;
Executor registration: the task-registration beat defaults to 30s; the executor registers once per beat, the scheduling center performs dynamic discovery once per beat, and registration info expires after three beats;
Executor de-registration: when an executor shuts down, it proactively notifies the scheduling center to remove its machine info, improving the timeliness of heartbeat registration;
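A minimal sketch of the registration beat, assuming the 30s default described above; the actual registry refresh (an upsert of group, AppName and address into XXL_JOB_REGISTRY) is abstracted away as a Runnable:
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RegistryBeatSketch {

    private static final int BEAT_SECONDS = 30; // default beat from the text above

    // The executor refreshes its registry row every beat; on the admin side,
    // rows not refreshed within 3 * BEAT_SECONDS are treated as dead.
    public static ScheduledExecutorService startBeat(Runnable refreshRegistryRow) {
        ScheduledExecutorService beat = Executors.newSingleThreadScheduledExecutor();
        beat.scheduleWithFixedDelay(refreshRegistryRow, 0, BEAT_SECONDS, TimeUnit.SECONDS);
        return beat;
    }
}
```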
@ -1479,15 +1449,17 @@ Tips: 历史版本(V1.3.x)目前已经Release至稳定版本, 进入维护阶段
### 6.25 Version v2.1.0 Release Notes [in planning]
- 1. [Planned] Remove Quartz, simplify the underlying implementation, and fix known issues;
- 1. Self-developed scheduling component, removing the Quartz dependency: on one hand to streamline the system and reduce redundant dependencies, on the other to improve the system's controllability and stability;
    - Trigger: periodic triggering on a single node, running like a delay queue;
    - Scheduling: cluster nodes compete and cooperate in a load-balanced manner; compete -> add to time wheel -> release -> compete;
- 2. User management: manage system users online, with two roles, admin and ordinary user;
- 3. Permission management: permissions are controlled per executor; admins have full permissions, ordinary users must be granted executor permissions before performing related operations;
- 4. Scheduling log optimization: configurable log retention in days; expired logs are summarized into daily reports and cleaned up; the scheduling report aggregates real-time data and daily reports;
- 5. Scheduling thread-pool parameter tuning;
- 6. Upgrade xxl-rpc to a newer version and clean up redundant POM entries;
- 7. Registry-table index optimization to mitigate table locking;
    - Scheduling: cluster nodes compete and cooperate in a load-balanced manner; lock contention -> update trigger info -> push into the time wheel -> release lock -> contend again;
- 2. Underlying table structure refactored: the 11 Quartz-related tables removed and the remaining tables cleaned up and optimized;
- 3. Underlying thread model refactored: the Quartz thread pool removed, reducing system thread and memory overhead;
- 4. User management: manage system users online, with two roles, admin and ordinary user;
- 5. Permission management: permissions are controlled per executor; admins have full permissions, ordinary users must be granted executor permissions before performing related operations;
- 6. Scheduling log optimization: configurable log retention in days; expired logs are summarized into daily reports and cleaned up; the scheduling report aggregates real-time data and daily reports;
- 7. Scheduling thread-pool parameter tuning;
- 8. Upgrade xxl-rpc to a newer version and clean up redundant POM entries;
- 9. Registry-table index optimization to mitigate table locking;
### TODO LIST

View File

@ -1,168 +0,0 @@
#
# Quartz seems to work best with the driver mm.mysql-2.0.7-bin.jar
#
# PLEASE consider using mysql with innodb tables to avoid locking issues
#
# In your Quartz properties file, you'll need to set
# org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
#
DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS;
DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE;
DROP TABLE IF EXISTS QRTZ_LOCKS;
DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_JOB_DETAILS;
DROP TABLE IF EXISTS QRTZ_CALENDARS;
CREATE TABLE QRTZ_JOB_DETAILS
(
SCHED_NAME VARCHAR(120) NOT NULL,
JOB_NAME VARCHAR(200) NOT NULL,
JOB_GROUP VARCHAR(200) NOT NULL,
DESCRIPTION VARCHAR(250) NULL,
JOB_CLASS_NAME VARCHAR(250) NOT NULL,
IS_DURABLE VARCHAR(1) NOT NULL,
IS_NONCONCURRENT VARCHAR(1) NOT NULL,
IS_UPDATE_DATA VARCHAR(1) NOT NULL,
REQUESTS_RECOVERY VARCHAR(1) NOT NULL,
JOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME,JOB_NAME,JOB_GROUP)
);
CREATE TABLE QRTZ_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
JOB_NAME VARCHAR(200) NOT NULL,
JOB_GROUP VARCHAR(200) NOT NULL,
DESCRIPTION VARCHAR(250) NULL,
NEXT_FIRE_TIME BIGINT(13) NULL,
PREV_FIRE_TIME BIGINT(13) NULL,
PRIORITY INTEGER NULL,
TRIGGER_STATE VARCHAR(16) NOT NULL,
TRIGGER_TYPE VARCHAR(8) NOT NULL,
START_TIME BIGINT(13) NOT NULL,
END_TIME BIGINT(13) NULL,
CALENDAR_NAME VARCHAR(200) NULL,
MISFIRE_INSTR SMALLINT(2) NULL,
JOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,JOB_NAME,JOB_GROUP)
REFERENCES QRTZ_JOB_DETAILS(SCHED_NAME,JOB_NAME,JOB_GROUP)
);
CREATE TABLE QRTZ_SIMPLE_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
REPEAT_COUNT BIGINT(7) NOT NULL,
REPEAT_INTERVAL BIGINT(12) NOT NULL,
TIMES_TRIGGERED BIGINT(10) NOT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE QRTZ_CRON_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
CRON_EXPRESSION VARCHAR(200) NOT NULL,
TIME_ZONE_ID VARCHAR(80),
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE QRTZ_SIMPROP_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
STR_PROP_1 VARCHAR(512) NULL,
STR_PROP_2 VARCHAR(512) NULL,
STR_PROP_3 VARCHAR(512) NULL,
INT_PROP_1 INT NULL,
INT_PROP_2 INT NULL,
LONG_PROP_1 BIGINT NULL,
LONG_PROP_2 BIGINT NULL,
DEC_PROP_1 NUMERIC(13,4) NULL,
DEC_PROP_2 NUMERIC(13,4) NULL,
BOOL_PROP_1 VARCHAR(1) NULL,
BOOL_PROP_2 VARCHAR(1) NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE QRTZ_BLOB_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
BLOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE QRTZ_CALENDARS
(
SCHED_NAME VARCHAR(120) NOT NULL,
CALENDAR_NAME VARCHAR(200) NOT NULL,
CALENDAR BLOB NOT NULL,
PRIMARY KEY (SCHED_NAME,CALENDAR_NAME)
);
CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_GROUP)
);
CREATE TABLE QRTZ_FIRED_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
ENTRY_ID VARCHAR(95) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
INSTANCE_NAME VARCHAR(200) NOT NULL,
FIRED_TIME BIGINT(13) NOT NULL,
SCHED_TIME BIGINT(13) NOT NULL,
PRIORITY INTEGER NOT NULL,
STATE VARCHAR(16) NOT NULL,
JOB_NAME VARCHAR(200) NULL,
JOB_GROUP VARCHAR(200) NULL,
IS_NONCONCURRENT VARCHAR(1) NULL,
REQUESTS_RECOVERY VARCHAR(1) NULL,
PRIMARY KEY (SCHED_NAME,ENTRY_ID)
);
CREATE TABLE QRTZ_SCHEDULER_STATE
(
SCHED_NAME VARCHAR(120) NOT NULL,
INSTANCE_NAME VARCHAR(200) NOT NULL,
LAST_CHECKIN_TIME BIGINT(13) NOT NULL,
CHECKIN_INTERVAL BIGINT(13) NOT NULL,
PRIMARY KEY (SCHED_NAME,INSTANCE_NAME)
);
CREATE TABLE QRTZ_LOCKS
(
SCHED_NAME VARCHAR(120) NOT NULL,
LOCK_NAME VARCHAR(40) NOT NULL,
PRIMARY KEY (SCHED_NAME,LOCK_NAME)
);
commit;

View File

@ -3,153 +3,7 @@ use `xxl-job`;
CREATE TABLE XXL_JOB_QRTZ_JOB_DETAILS
(
SCHED_NAME VARCHAR(120) NOT NULL,
JOB_NAME VARCHAR(200) NOT NULL,
JOB_GROUP VARCHAR(200) NOT NULL,
DESCRIPTION VARCHAR(250) NULL,
JOB_CLASS_NAME VARCHAR(250) NOT NULL,
IS_DURABLE VARCHAR(1) NOT NULL,
IS_NONCONCURRENT VARCHAR(1) NOT NULL,
IS_UPDATE_DATA VARCHAR(1) NOT NULL,
REQUESTS_RECOVERY VARCHAR(1) NOT NULL,
JOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME,JOB_NAME,JOB_GROUP)
);
CREATE TABLE XXL_JOB_QRTZ_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
JOB_NAME VARCHAR(200) NOT NULL,
JOB_GROUP VARCHAR(200) NOT NULL,
DESCRIPTION VARCHAR(250) NULL,
NEXT_FIRE_TIME BIGINT(13) NULL,
PREV_FIRE_TIME BIGINT(13) NULL,
PRIORITY INTEGER NULL,
TRIGGER_STATE VARCHAR(16) NOT NULL,
TRIGGER_TYPE VARCHAR(8) NOT NULL,
START_TIME BIGINT(13) NOT NULL,
END_TIME BIGINT(13) NULL,
CALENDAR_NAME VARCHAR(200) NULL,
MISFIRE_INSTR SMALLINT(2) NULL,
JOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,JOB_NAME,JOB_GROUP)
REFERENCES XXL_JOB_QRTZ_JOB_DETAILS(SCHED_NAME,JOB_NAME,JOB_GROUP)
);
CREATE TABLE XXL_JOB_QRTZ_SIMPLE_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
REPEAT_COUNT BIGINT(7) NOT NULL,
REPEAT_INTERVAL BIGINT(12) NOT NULL,
TIMES_TRIGGERED BIGINT(10) NOT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES XXL_JOB_QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE XXL_JOB_QRTZ_CRON_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
CRON_EXPRESSION VARCHAR(200) NOT NULL,
TIME_ZONE_ID VARCHAR(80),
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES XXL_JOB_QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE XXL_JOB_QRTZ_SIMPROP_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
STR_PROP_1 VARCHAR(512) NULL,
STR_PROP_2 VARCHAR(512) NULL,
STR_PROP_3 VARCHAR(512) NULL,
INT_PROP_1 INT NULL,
INT_PROP_2 INT NULL,
LONG_PROP_1 BIGINT NULL,
LONG_PROP_2 BIGINT NULL,
DEC_PROP_1 NUMERIC(13,4) NULL,
DEC_PROP_2 NUMERIC(13,4) NULL,
BOOL_PROP_1 VARCHAR(1) NULL,
BOOL_PROP_2 VARCHAR(1) NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES XXL_JOB_QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE XXL_JOB_QRTZ_BLOB_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
BLOB_DATA BLOB NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
REFERENCES XXL_JOB_QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
);
CREATE TABLE XXL_JOB_QRTZ_CALENDARS
(
SCHED_NAME VARCHAR(120) NOT NULL,
CALENDAR_NAME VARCHAR(200) NOT NULL,
CALENDAR BLOB NOT NULL,
PRIMARY KEY (SCHED_NAME,CALENDAR_NAME)
);
CREATE TABLE XXL_JOB_QRTZ_PAUSED_TRIGGER_GRPS
(
SCHED_NAME VARCHAR(120) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_GROUP)
);
CREATE TABLE XXL_JOB_QRTZ_FIRED_TRIGGERS
(
SCHED_NAME VARCHAR(120) NOT NULL,
ENTRY_ID VARCHAR(95) NOT NULL,
TRIGGER_NAME VARCHAR(200) NOT NULL,
TRIGGER_GROUP VARCHAR(200) NOT NULL,
INSTANCE_NAME VARCHAR(200) NOT NULL,
FIRED_TIME BIGINT(13) NOT NULL,
SCHED_TIME BIGINT(13) NOT NULL,
PRIORITY INTEGER NOT NULL,
STATE VARCHAR(16) NOT NULL,
JOB_NAME VARCHAR(200) NULL,
JOB_GROUP VARCHAR(200) NULL,
IS_NONCONCURRENT VARCHAR(1) NULL,
REQUESTS_RECOVERY VARCHAR(1) NULL,
PRIMARY KEY (SCHED_NAME,ENTRY_ID)
);
CREATE TABLE XXL_JOB_QRTZ_SCHEDULER_STATE
(
SCHED_NAME VARCHAR(120) NOT NULL,
INSTANCE_NAME VARCHAR(200) NOT NULL,
LAST_CHECKIN_TIME BIGINT(13) NOT NULL,
CHECKIN_INTERVAL BIGINT(13) NOT NULL,
PRIMARY KEY (SCHED_NAME,INSTANCE_NAME)
);
CREATE TABLE XXL_JOB_QRTZ_LOCKS
(
SCHED_NAME VARCHAR(120) NOT NULL,
LOCK_NAME VARCHAR(40) NOT NULL,
PRIMARY KEY (SCHED_NAME,LOCK_NAME)
);
CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_INFO` (
CREATE TABLE `XXL_JOB_INFO` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`job_group` int(11) NOT NULL COMMENT '执行器主键ID',
`job_cron` varchar(128) NOT NULL COMMENT '任务执行CRON',
@ -169,10 +23,13 @@ CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_INFO` (
`glue_remark` varchar(128) DEFAULT NULL COMMENT 'GLUE备注',
`glue_updatetime` datetime DEFAULT NULL COMMENT 'GLUE更新时间',
`child_jobid` varchar(255) DEFAULT NULL COMMENT '子任务ID多个逗号分隔',
`trigger_status` tinyint(4) NOT NULL DEFAULT '0' COMMENT '调度状态0-停止1-运行',
`trigger_last_time` bigint(13) NOT NULL DEFAULT '0' COMMENT '上次调度时间',
`trigger_next_time` bigint(13) NOT NULL DEFAULT '0' COMMENT '下次调度时间',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_LOG` (
CREATE TABLE `XXL_JOB_LOG` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`job_group` int(11) NOT NULL COMMENT '执行器主键ID',
`job_id` int(11) NOT NULL COMMENT '任务主键ID',
@ -193,7 +50,7 @@ CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_LOG` (
KEY `I_handle_code` (`handle_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_LOGGLUE` (
CREATE TABLE `XXL_JOB_LOGGLUE` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`job_id` int(11) NOT NULL COMMENT '任务主键ID',
`glue_type` varchar(50) DEFAULT NULL COMMENT 'GLUE类型',
@ -204,7 +61,7 @@ CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_LOGGLUE` (
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_REGISTRY` (
CREATE TABLE `XXL_JOB_REGISTRY` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`registry_group` varchar(255) NOT NULL,
`registry_key` varchar(255) NOT NULL,
@ -214,7 +71,7 @@ CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_REGISTRY` (
KEY `i_g_k_v` (`registry_group`,`registry_key`,`registry_value`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_GROUP` (
CREATE TABLE `XXL_JOB_GROUP` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`app_name` varchar(64) NOT NULL COMMENT '执行器AppName',
`title` varchar(12) NOT NULL COMMENT '执行器名称',
@ -224,7 +81,7 @@ CREATE TABLE `XXL_JOB_QRTZ_TRIGGER_GROUP` (
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `XXL_JOB_QRTZ_USER` (
CREATE TABLE `XXL_JOB_USER` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`username` varchar(50) NOT NULL COMMENT '账号',
`password` varchar(50) NOT NULL COMMENT '密码',
@ -234,10 +91,16 @@ CREATE TABLE `XXL_JOB_QRTZ_USER` (
UNIQUE KEY `i_username` (`username`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `XXL_JOB_LOCK` (
`lock_name` varchar(50) NOT NULL COMMENT '锁名称',
PRIMARY KEY (`lock_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `XXL_JOB_QRTZ_TRIGGER_GROUP`(`id`, `app_name`, `title`, `order`, `address_type`, `address_list`) VALUES (1, 'xxl-job-executor-sample', '示例执行器', 1, 0, NULL);
INSERT INTO `XXL_JOB_QRTZ_TRIGGER_INFO`(`id`, `job_group`, `job_cron`, `job_desc`, `add_time`, `update_time`, `author`, `alarm_email`, `executor_route_strategy`, `executor_handler`, `executor_param`, `executor_block_strategy`, `executor_timeout`, `executor_fail_retry_count`, `glue_type`, `glue_source`, `glue_remark`, `glue_updatetime`, `child_jobid`) VALUES (1, 1, '0 0 0 * * ? *', '测试任务1', '2018-11-03 22:21:31', '2018-11-03 22:21:31', 'XXL', '', 'FIRST', 'demoJobHandler', '', 'SERIAL_EXECUTION', 0, 0, 'BEAN', '', 'GLUE代码初始化', '2018-11-03 22:21:31', '');
INSERT INTO `XXL_JOB_QRTZ_USER`(`id`, `username`, `password`, `role`, `permission`) VALUES (1, 'admin', 'e10adc3949ba59abbe56e057f20f883e', 1, NULL);
INSERT INTO `XXL_JOB_GROUP`(`id`, `app_name`, `title`, `order`, `address_type`, `address_list`) VALUES (1, 'xxl-job-executor-sample', '示例执行器', 1, 0, NULL);
INSERT INTO `XXL_JOB_INFO`(`id`, `job_group`, `job_cron`, `job_desc`, `add_time`, `update_time`, `author`, `alarm_email`, `executor_route_strategy`, `executor_handler`, `executor_param`, `executor_block_strategy`, `executor_timeout`, `executor_fail_retry_count`, `glue_type`, `glue_source`, `glue_remark`, `glue_updatetime`, `child_jobid`) VALUES (1, 1, '0 0 0 * * ? *', '测试任务1', '2018-11-03 22:21:31', '2018-11-03 22:21:31', 'XXL', '', 'FIRST', 'demoJobHandler', '', 'SERIAL_EXECUTION', 0, 0, 'BEAN', '', 'GLUE代码初始化', '2018-11-03 22:21:31', '');
INSERT INTO `XXL_JOB_USER`(`id`, `username`, `password`, `role`, `permission`) VALUES (1, 'admin', 'e10adc3949ba59abbe56e057f20f883e', 1, NULL);
INSERT INTO `XXL_JOB_LOCK` ( `lock_name`) VALUES ( 'schedule_lock');
commit;

View File

@ -37,7 +37,6 @@
<commons-exec.version>1.3</commons-exec.version>
<groovy.version>2.5.6</groovy.version>
<quartz.version>2.3.1</quartz.version>
<maven-source-plugin.version>3.0.1</maven-source-plugin.version>
<maven-javadoc-plugin.version>3.1.0</maven-javadoc-plugin.version>

View File

@ -60,13 +60,6 @@
<version>${mysql-connector-java.version}</version>
</dependency>
<!-- quartz quartz-2.2.3/c3p0-0.9.1.1/slf4j-api-1.6.6 -->
<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz</artifactId>
<version>${quartz.version}</version>
</dependency>
<!-- xxl-job-core -->
<dependency>
<groupId>com.xuxueli</groupId>

View File

@ -1,7 +1,7 @@
package com.xxl.job.admin.controller;
import com.xxl.job.admin.controller.annotation.PermissionLimit;
import com.xxl.job.admin.core.schedule.XxlJobDynamicScheduler;
import com.xxl.job.admin.core.conf.XxlJobScheduler;
import com.xxl.job.core.biz.AdminBiz;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Controller;
@ -27,7 +27,7 @@ public class JobApiController implements InitializingBean {
@RequestMapping(AdminBiz.MAPPING)
@PermissionLimit(limit=false)
public void api(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
XxlJobDynamicScheduler.invokeAdminService(request, response);
XxlJobScheduler.invokeAdminService(request, response);
}

View File

@ -1,10 +1,10 @@
package com.xxl.job.admin.controller;
import com.xxl.job.admin.core.conf.XxlJobScheduler;
import com.xxl.job.admin.core.exception.XxlJobException;
import com.xxl.job.admin.core.model.XxlJobGroup;
import com.xxl.job.admin.core.model.XxlJobInfo;
import com.xxl.job.admin.core.model.XxlJobLog;
import com.xxl.job.admin.core.schedule.XxlJobDynamicScheduler;
import com.xxl.job.admin.core.util.I18nUtil;
import com.xxl.job.admin.dao.XxlJobGroupDao;
import com.xxl.job.admin.dao.XxlJobInfoDao;
@ -136,7 +136,7 @@ public class JobLogController {
@ResponseBody
public ReturnT<LogResult> logDetailCat(String executorAddress, long triggerTime, int logId, int fromLineNum){
try {
ExecutorBiz executorBiz = XxlJobDynamicScheduler.getExecutorBiz(executorAddress);
ExecutorBiz executorBiz = XxlJobScheduler.getExecutorBiz(executorAddress);
ReturnT<LogResult> logResult = executorBiz.log(triggerTime, logId, fromLineNum);
// is end
@ -170,7 +170,7 @@ public class JobLogController {
// request of kill
ReturnT<String> runResult = null;
try {
ExecutorBiz executorBiz = XxlJobDynamicScheduler.getExecutorBiz(log.getExecutorAddress());
ExecutorBiz executorBiz = XxlJobScheduler.getExecutorBiz(log.getExecutorAddress());
runResult = executorBiz.kill(jobInfo.getId());
} catch (Exception e) {
logger.error(e.getMessage(), e);

View File

@ -11,6 +11,7 @@ import org.springframework.context.annotation.Configuration;
import org.springframework.mail.javamail.JavaMailSender;
import javax.annotation.Resource;
import javax.sql.DataSource;
/**
* xxl-job config
@ -53,6 +54,8 @@ public class XxlJobAdminConfig implements InitializingBean{
private AdminBiz adminBiz;
@Resource
private JavaMailSender mailSender;
@Resource
private DataSource dataSource;
public String getI18n() {
@ -91,4 +94,8 @@ public class XxlJobAdminConfig implements InitializingBean{
return mailSender;
}
public DataSource getDataSource() {
return dataSource;
}
}

View File

@ -1,43 +0,0 @@
package com.xxl.job.admin.core.conf;
import com.xxl.job.admin.core.schedule.XxlJobDynamicScheduler;
import org.quartz.Scheduler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;
import javax.sql.DataSource;
/**
* @author xuxueli 2018-10-28 00:18:17
*/
@Configuration
public class XxlJobDynamicSchedulerConfig {
@Bean
public SchedulerFactoryBean getSchedulerFactoryBean(DataSource dataSource){
SchedulerFactoryBean schedulerFactory = new SchedulerFactoryBean();
schedulerFactory.setDataSource(dataSource);
schedulerFactory.setAutoStartup(true); // 自动启动
schedulerFactory.setStartupDelay(20); // 延时启动应用启动成功后在启动
schedulerFactory.setOverwriteExistingJobs(true); // 覆盖DB中JOBtrue以数据库中已经存在的为准false
schedulerFactory.setApplicationContextSchedulerContextKey("applicationContext");
schedulerFactory.setConfigLocation(new ClassPathResource("quartz.properties"));
return schedulerFactory;
}
@Bean(initMethod = "start", destroyMethod = "destroy")
public XxlJobDynamicScheduler getXxlJobDynamicScheduler(SchedulerFactoryBean schedulerFactory){
Scheduler scheduler = schedulerFactory.getScheduler();
XxlJobDynamicScheduler xxlJobDynamicScheduler = new XxlJobDynamicScheduler();
xxlJobDynamicScheduler.setScheduler(scheduler);
return xxlJobDynamicScheduler;
}
}

View File

@ -0,0 +1,147 @@
package com.xxl.job.admin.core.conf;
import com.xxl.job.admin.core.thread.JobFailMonitorHelper;
import com.xxl.job.admin.core.thread.JobRegistryMonitorHelper;
import com.xxl.job.admin.core.thread.JobScheduleHelper;
import com.xxl.job.admin.core.thread.JobTriggerPoolHelper;
import com.xxl.job.admin.core.util.I18nUtil;
import com.xxl.job.core.biz.AdminBiz;
import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.enums.ExecutorBlockStrategyEnum;
import com.xxl.rpc.remoting.invoker.XxlRpcInvokerFactory;
import com.xxl.rpc.remoting.invoker.call.CallType;
import com.xxl.rpc.remoting.invoker.reference.XxlRpcReferenceBean;
import com.xxl.rpc.remoting.invoker.route.LoadBalance;
import com.xxl.rpc.remoting.net.NetEnum;
import com.xxl.rpc.remoting.net.impl.servlet.server.ServletServerHandler;
import com.xxl.rpc.remoting.provider.XxlRpcProviderFactory;
import com.xxl.rpc.serialize.Serializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.annotation.Configuration;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
/**
* @author xuxueli 2018-10-28 00:18:17
*/
@Configuration
public class XxlJobScheduler implements InitializingBean, DisposableBean {
private static final Logger logger = LoggerFactory.getLogger(XxlJobScheduler.class);
@Override
public void afterPropertiesSet() throws Exception {
// init i18n
initI18n();
// admin registry monitor run
JobRegistryMonitorHelper.getInstance().start();
// admin monitor run
JobFailMonitorHelper.getInstance().start();
// admin-server
initRpcProvider();
// start-schedule
JobScheduleHelper.getInstance().start();
logger.info(">>>>>>>>> init xxl-job admin success.");
}
@Override
public void destroy() throws Exception {
// stop-schedule
JobScheduleHelper.getInstance().toStop();
// admin trigger pool stop
JobTriggerPoolHelper.toStop();
// admin registry stop
JobRegistryMonitorHelper.getInstance().toStop();
// admin monitor stop
JobFailMonitorHelper.getInstance().toStop();
// admin-server
stopRpcProvider();
}
// ---------------------- I18n ----------------------
private void initI18n(){
for (ExecutorBlockStrategyEnum item:ExecutorBlockStrategyEnum.values()) {
item.setTitle(I18nUtil.getString("jobconf_block_".concat(item.name())));
}
}
// ---------------------- admin rpc provider (no server version) ----------------------
private static ServletServerHandler servletServerHandler;
private void initRpcProvider(){
// init
XxlRpcProviderFactory xxlRpcProviderFactory = new XxlRpcProviderFactory();
xxlRpcProviderFactory.initConfig(
NetEnum.NETTY_HTTP,
Serializer.SerializeEnum.HESSIAN.getSerializer(),
null,
0,
XxlJobAdminConfig.getAdminConfig().getAccessToken(),
null,
null);
// add services
xxlRpcProviderFactory.addService(AdminBiz.class.getName(), null, XxlJobAdminConfig.getAdminConfig().getAdminBiz());
// servlet handler
servletServerHandler = new ServletServerHandler(xxlRpcProviderFactory);
}
private void stopRpcProvider() throws Exception {
XxlRpcInvokerFactory.getInstance().stop();
}
public static void invokeAdminService(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
servletServerHandler.handle(null, request, response);
}
// ---------------------- executor-client ----------------------
private static ConcurrentHashMap<String, ExecutorBiz> executorBizRepository = new ConcurrentHashMap<String, ExecutorBiz>();
public static ExecutorBiz getExecutorBiz(String address) throws Exception {
// valid
if (address==null || address.trim().length()==0) {
return null;
}
// load-cache
address = address.trim();
ExecutorBiz executorBiz = executorBizRepository.get(address);
if (executorBiz != null) {
return executorBiz;
}
// set-cache
executorBiz = (ExecutorBiz) new XxlRpcReferenceBean(
NetEnum.NETTY_HTTP,
Serializer.SerializeEnum.HESSIAN.getSerializer(),
CallType.SYNC,
LoadBalance.ROUND,
ExecutorBiz.class,
null,
5000,
address,
XxlJobAdminConfig.getAdminConfig().getAccessToken(),
null,
null).getObject();
executorBizRepository.put(address, executorBiz);
return executorBiz;
}
}

File diff suppressed because it is too large

View File

@ -1,33 +0,0 @@
package com.xxl.job.admin.core.jobbean;
import com.xxl.job.admin.core.thread.JobTriggerPoolHelper;
import com.xxl.job.admin.core.trigger.TriggerTypeEnum;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.JobKey;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.quartz.QuartzJobBean;
/**
* http job bean
* @DisallowConcurrentExecution diable concurrent, thread size can not be only one, better given more
* @author xuxueli 2015-12-17 18:20:34
*/
//@DisallowConcurrentExecution
public class RemoteHttpJobBean extends QuartzJobBean {
private static Logger logger = LoggerFactory.getLogger(RemoteHttpJobBean.class);
@Override
protected void executeInternal(JobExecutionContext context)
throws JobExecutionException {
// load jobId
JobKey jobKey = context.getTrigger().getJobKey();
Integer jobId = Integer.valueOf(jobKey.getName());
// trigger
JobTriggerPoolHelper.trigger(jobId, TriggerTypeEnum.CRON, -1, null, null);
}
}

View File

@ -9,10 +9,10 @@ import java.util.Date;
*/
public class XxlJobInfo {
private int id; // 主键ID (JobKey.name)
private int id; // 主键ID
private int jobGroup; // 执行器主键ID (JobKey.group)
private String jobCron; // 任务执行CRON表达式 base on quartz
private int jobGroup; // 执行器主键ID
private String jobCron; // 任务执行CRON表达式
private String jobDesc;
private Date addTime;
@ -35,8 +35,9 @@ public class XxlJobInfo {
private String childJobId; // 子任务ID多个逗号分隔
// copy from quartz
private String jobStatus; // 任务状态 base on quartz
private int triggerStatus; // 调度状态0-停止1-运行
private long triggerLastTime; // 上次调度时间
private long triggerNextTime; // 下次调度时间
public int getId() {
@ -191,12 +192,27 @@ public class XxlJobInfo {
this.childJobId = childJobId;
}
public String getJobStatus() {
return jobStatus;
public int getTriggerStatus() {
return triggerStatus;
}
public void setJobStatus(String jobStatus) {
this.jobStatus = jobStatus;
public void setTriggerStatus(int triggerStatus) {
this.triggerStatus = triggerStatus;
}
public long getTriggerLastTime() {
return triggerLastTime;
}
public void setTriggerLastTime(long triggerLastTime) {
this.triggerLastTime = triggerLastTime;
}
public long getTriggerNextTime() {
return triggerNextTime;
}
public void setTriggerNextTime(long triggerNextTime) {
this.triggerNextTime = triggerNextTime;
}
}

View File

@ -0,0 +1,32 @@
//package com.xxl.job.admin.core.jobbean;
//
//import com.xxl.job.admin.core.thread.JobTriggerPoolHelper;
//import com.xxl.job.admin.core.trigger.TriggerTypeEnum;
//import org.quartz.JobExecutionContext;
//import org.quartz.JobExecutionException;
//import org.quartz.JobKey;
//import org.slf4j.Logger;
//import org.slf4j.LoggerFactory;
//import org.springframework.scheduling.quartz.QuartzJobBean;
//
///**
// * http job bean
// * @DisallowConcurrentExecution diable concurrent, thread size can not be only one, better given more
// * @author xuxueli 2015-12-17 18:20:34
// */
////@DisallowConcurrentExecution
//public class RemoteHttpJobBean extends QuartzJobBean {
// private static Logger logger = LoggerFactory.getLogger(RemoteHttpJobBean.class);
//
// @Override
// protected void executeInternal(JobExecutionContext context)
// throws JobExecutionException {
//
// // load jobId
// JobKey jobKey = context.getTrigger().getJobKey();
// Integer jobId = Integer.valueOf(jobKey.getName());
//
//
// }
//
//}

View File

@ -0,0 +1,413 @@
//package com.xxl.job.admin.core.schedule;
//
//import com.xxl.job.admin.core.conf.XxlJobAdminConfig;
//import com.xxl.job.admin.core.jobbean.RemoteHttpJobBean;
//import com.xxl.job.admin.core.model.XxlJobInfo;
//import com.xxl.job.admin.core.thread.JobFailMonitorHelper;
//import com.xxl.job.admin.core.thread.JobRegistryMonitorHelper;
//import com.xxl.job.admin.core.thread.JobTriggerPoolHelper;
//import com.xxl.job.admin.core.util.I18nUtil;
//import com.xxl.job.core.biz.AdminBiz;
//import com.xxl.job.core.biz.ExecutorBiz;
//import com.xxl.job.core.enums.ExecutorBlockStrategyEnum;
//import com.xxl.rpc.remoting.invoker.XxlRpcInvokerFactory;
//import com.xxl.rpc.remoting.invoker.call.CallType;
//import com.xxl.rpc.remoting.invoker.reference.XxlRpcReferenceBean;
//import com.xxl.rpc.remoting.invoker.route.LoadBalance;
//import com.xxl.rpc.remoting.net.NetEnum;
//import com.xxl.rpc.remoting.net.impl.servlet.server.ServletServerHandler;
//import com.xxl.rpc.remoting.provider.XxlRpcProviderFactory;
//import com.xxl.rpc.serialize.Serializer;
//import org.quartz.*;
//import org.quartz.Trigger.TriggerState;
//import org.quartz.impl.triggers.CronTriggerImpl;
//import org.slf4j.Logger;
//import org.slf4j.LoggerFactory;
//import org.springframework.util.Assert;
//
//import javax.servlet.ServletException;
//import javax.servlet.http.HttpServletRequest;
//import javax.servlet.http.HttpServletResponse;
//import java.io.IOException;
//import java.util.Date;
//import java.util.concurrent.ConcurrentHashMap;
//
///**
// * base quartz scheduler util
// * @author xuxueli 2015-12-19 16:13:53
// */
//public final class XxlJobDynamicScheduler {
// private static final Logger logger = LoggerFactory.getLogger(XxlJobDynamicScheduler_old.class);
//
// // ---------------------- param ----------------------
//
// // scheduler
// private static Scheduler scheduler;
// public void setScheduler(Scheduler scheduler) {
// XxlJobDynamicScheduler_old.scheduler = scheduler;
// }
//
//
// // ---------------------- init + destroy ----------------------
// public void start() throws Exception {
// // valid
// Assert.notNull(scheduler, "quartz scheduler is null");
//
// // init i18n
// initI18n();
//
// // admin registry monitor run
// JobRegistryMonitorHelper.getInstance().start();
//
// // admin monitor run
// JobFailMonitorHelper.getInstance().start();
//
// // admin-server
// initRpcProvider();
//
// logger.info(">>>>>>>>> init xxl-job admin success.");
// }
//
//
// public void destroy() throws Exception {
// // admin trigger pool stop
// JobTriggerPoolHelper.toStop();
//
// // admin registry stop
// JobRegistryMonitorHelper.getInstance().toStop();
//
// // admin monitor stop
// JobFailMonitorHelper.getInstance().toStop();
//
// // admin-server
// stopRpcProvider();
// }
//
//
// // ---------------------- I18n ----------------------
//
// private void initI18n(){
// for (ExecutorBlockStrategyEnum item:ExecutorBlockStrategyEnum.values()) {
// item.setTitle(I18nUtil.getString("jobconf_block_".concat(item.name())));
// }
// }
//
//
// // ---------------------- admin rpc provider (no server version) ----------------------
// private static ServletServerHandler servletServerHandler;
// private void initRpcProvider(){
// // init
// XxlRpcProviderFactory xxlRpcProviderFactory = new XxlRpcProviderFactory();
// xxlRpcProviderFactory.initConfig(
// NetEnum.NETTY_HTTP,
// Serializer.SerializeEnum.HESSIAN.getSerializer(),
// null,
// 0,
// XxlJobAdminConfig.getAdminConfig().getAccessToken(),
// null,
// null);
//
// // add services
// xxlRpcProviderFactory.addService(AdminBiz.class.getName(), null, XxlJobAdminConfig.getAdminConfig().getAdminBiz());
//
// // servlet handler
// servletServerHandler = new ServletServerHandler(xxlRpcProviderFactory);
// }
// private void stopRpcProvider() throws Exception {
// XxlRpcInvokerFactory.getInstance().stop();
// }
// public static void invokeAdminService(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
// servletServerHandler.handle(null, request, response);
// }
//
//
// // ---------------------- executor-client ----------------------
// private static ConcurrentHashMap<String, ExecutorBiz> executorBizRepository = new ConcurrentHashMap<String, ExecutorBiz>();
// public static ExecutorBiz getExecutorBiz(String address) throws Exception {
// // valid
// if (address==null || address.trim().length()==0) {
// return null;
// }
//
// // load-cache
// address = address.trim();
// ExecutorBiz executorBiz = executorBizRepository.get(address);
// if (executorBiz != null) {
// return executorBiz;
// }
//
// // set-cache
// executorBiz = (ExecutorBiz) new XxlRpcReferenceBean(
// NetEnum.NETTY_HTTP,
// Serializer.SerializeEnum.HESSIAN.getSerializer(),
// CallType.SYNC,
// LoadBalance.ROUND,
// ExecutorBiz.class,
// null,
// 5000,
// address,
// XxlJobAdminConfig.getAdminConfig().getAccessToken(),
// null,
// null).getObject();
//
// executorBizRepository.put(address, executorBiz);
// return executorBiz;
// }
//
//
// // ---------------------- schedule util ----------------------
//
// /**
// * fill job info
// *
// * @param jobInfo
// */
// public static void fillJobInfo(XxlJobInfo jobInfo) {
//
// String name = String.valueOf(jobInfo.getId());
//
// // trigger key
// TriggerKey triggerKey = TriggerKey.triggerKey(name);
// try {
//
// // trigger cron
// Trigger trigger = scheduler.getTrigger(triggerKey);
// if (trigger!=null && trigger instanceof CronTriggerImpl) {
// String cronExpression = ((CronTriggerImpl) trigger).getCronExpression();
// jobInfo.setJobCron(cronExpression);
// }
//
// // trigger state
// TriggerState triggerState = scheduler.getTriggerState(triggerKey);
// if (triggerState!=null) {
// jobInfo.setJobStatus(triggerState.name());
// }
//
// //JobKey jobKey = new JobKey(jobInfo.getJobName(), String.valueOf(jobInfo.getJobGroup()));
// //JobDetail jobDetail = scheduler.getJobDetail(jobKey);
// //String jobClass = jobDetail.getJobClass().getName();
//
// } catch (SchedulerException e) {
// logger.error(e.getMessage(), e);
// }
// }
//
//
// /**
// * add trigger + job
// *
// * @param jobName
// * @param cronExpression
// * @return
// * @throws SchedulerException
// */
// public static boolean addJob(String jobName, String cronExpression) throws SchedulerException {
// // 1job key
// TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
// JobKey jobKey = new JobKey(jobName);
//
// // 2valid
// if (scheduler.checkExists(triggerKey)) {
// return true; // PASS
// }
//
// // 3corn trigger
// CronScheduleBuilder cronScheduleBuilder = CronScheduleBuilder.cronSchedule(cronExpression).withMisfireHandlingInstructionDoNothing(); // withMisfireHandlingInstructionDoNothing 忽略掉调度终止过程中忽略的调度
// CronTrigger cronTrigger = TriggerBuilder.newTrigger().withIdentity(triggerKey).withSchedule(cronScheduleBuilder).build();
//
// // 4job detail
// Class<? extends Job> jobClass_ = RemoteHttpJobBean.class; // Class.forName(jobInfo.getJobClass());
// JobDetail jobDetail = JobBuilder.newJob(jobClass_).withIdentity(jobKey).build();
//
// /*if (jobInfo.getJobData()!=null) {
// JobDataMap jobDataMap = jobDetail.getJobDataMap();
// jobDataMap.putAll(JacksonUtil.readValue(jobInfo.getJobData(), Map.class));
// // JobExecutionContext context.getMergedJobDataMap().get("mailGuid");
// }*/
//
// // 5schedule job
// Date date = scheduler.scheduleJob(jobDetail, cronTrigger);
//
// logger.info(">>>>>>>>>>> addJob success(quartz), jobDetail:{}, cronTrigger:{}, date:{}", jobDetail, cronTrigger, date);
// return true;
// }
//
//
// /**
// * remove trigger + job
// *
// * @param jobName
// * @return
// * @throws SchedulerException
// */
// public static boolean removeJob(String jobName) throws SchedulerException {
//
// JobKey jobKey = new JobKey(jobName);
// scheduler.deleteJob(jobKey);
//
// /*TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
// if (scheduler.checkExists(triggerKey)) {
// scheduler.unscheduleJob(triggerKey); // trigger + job
// }*/
//
// logger.info(">>>>>>>>>>> removeJob success(quartz), jobKey:{}", jobKey);
// return true;
// }
//
//
// /**
// * updateJobCron
// *
// * @param jobName
// * @param cronExpression
// * @return
// * @throws SchedulerException
// */
// public static boolean updateJobCron(String jobName, String cronExpression) throws SchedulerException {
//
// // 1job key
// TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
//
// // 2valid
// if (!scheduler.checkExists(triggerKey)) {
// return true; // PASS
// }
//
// CronTrigger oldTrigger = (CronTrigger) scheduler.getTrigger(triggerKey);
//
// // 3avoid repeat cron
// String oldCron = oldTrigger.getCronExpression();
// if (oldCron.equals(cronExpression)){
// return true; // PASS
// }
//
// // 4new cron trigger
// CronScheduleBuilder cronScheduleBuilder = CronScheduleBuilder.cronSchedule(cronExpression).withMisfireHandlingInstructionDoNothing();
// oldTrigger = oldTrigger.getTriggerBuilder().withIdentity(triggerKey).withSchedule(cronScheduleBuilder).build();
//
// // 5rescheduleJob
// scheduler.rescheduleJob(triggerKey, oldTrigger);
//
// /*
// JobKey jobKey = new JobKey(jobName);
//
// // old job detail
// JobDetail jobDetail = scheduler.getJobDetail(jobKey);
//
// // new trigger
// HashSet<Trigger> triggerSet = new HashSet<Trigger>();
// triggerSet.add(cronTrigger);
// // cover trigger of job detail
// scheduler.scheduleJob(jobDetail, triggerSet, true);*/
//
// logger.info(">>>>>>>>>>> resumeJob success, JobName:{}", jobName);
// return true;
// }
//
//
// /**
// * pause
// *
// * @param jobName
// * @return
// * @throws SchedulerException
// */
// /*public static boolean pauseJob(String jobName) throws SchedulerException {
//
// TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
//
// boolean result = false;
// if (scheduler.checkExists(triggerKey)) {
// scheduler.pauseTrigger(triggerKey);
// result = true;
// }
//
// logger.info(">>>>>>>>>>> pauseJob {}, triggerKey:{}", (result?"success":"fail"),triggerKey);
// return result;
// }*/
//
//
// /**
// * resume
// *
// * @param jobName
// * @return
// * @throws SchedulerException
// */
// /*public static boolean resumeJob(String jobName) throws SchedulerException {
//
// TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
//
// boolean result = false;
// if (scheduler.checkExists(triggerKey)) {
// scheduler.resumeTrigger(triggerKey);
// result = true;
// }
//
// logger.info(">>>>>>>>>>> resumeJob {}, triggerKey:{}", (result?"success":"fail"), triggerKey);
// return result;
// }*/
//
//
// /**
// * run
// *
// * @param jobName
// * @return
// * @throws SchedulerException
// */
// /*public static boolean triggerJob(String jobName) throws SchedulerException {
// // TriggerKey : name + group
// JobKey jobKey = new JobKey(jobName);
// TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
//
// boolean result = false;
// if (scheduler.checkExists(triggerKey)) {
// scheduler.triggerJob(jobKey);
// result = true;
// logger.info(">>>>>>>>>>> runJob success, jobKey:{}", jobKey);
// } else {
// logger.info(">>>>>>>>>>> runJob fail, jobKey:{}", jobKey);
// }
// return result;
// }*/
//
//
// /**
// * finaAllJobList
// *
// * @return
// *//*
// @Deprecated
// public static List<Map<String, Object>> finaAllJobList(){
// List<Map<String, Object>> jobList = new ArrayList<Map<String,Object>>();
//
// try {
// if (scheduler.getJobGroupNames()==null || scheduler.getJobGroupNames().size()==0) {
// return null;
// }
// String groupName = scheduler.getJobGroupNames().get(0);
// Set<JobKey> jobKeys = scheduler.getJobKeys(GroupMatcher.jobGroupEquals(groupName));
// if (jobKeys!=null && jobKeys.size()>0) {
// for (JobKey jobKey : jobKeys) {
// TriggerKey triggerKey = TriggerKey.triggerKey(jobKey.getName(), Scheduler.DEFAULT_GROUP);
// Trigger trigger = scheduler.getTrigger(triggerKey);
// JobDetail jobDetail = scheduler.getJobDetail(jobKey);
// TriggerState triggerState = scheduler.getTriggerState(triggerKey);
// Map<String, Object> jobMap = new HashMap<String, Object>();
// jobMap.put("TriggerKey", triggerKey);
// jobMap.put("Trigger", trigger);
// jobMap.put("JobDetail", jobDetail);
// jobMap.put("TriggerState", triggerState);
// jobList.add(jobMap);
// }
// }
//
// } catch (SchedulerException e) {
// logger.error(e.getMessage(), e);
// return null;
// }
// return jobList;
// }*/
//
//}

View File

@ -0,0 +1,58 @@
//package com.xxl.job.admin.core.quartz;
//
//import org.quartz.SchedulerConfigException;
//import org.quartz.spi.ThreadPool;
//
///**
// * single thread pool, for async trigger
// *
// * @author xuxueli 2019-03-06
// */
//public class XxlJobThreadPool implements ThreadPool {
//
// @Override
// public boolean runInThread(Runnable runnable) {
//
// // async run
// runnable.run();
// return true;
//
// //return false;
// }
//
// @Override
// public int blockForAvailableThreads() {
// return 1;
// }
//
// @Override
// public void initialize() throws SchedulerConfigException {
//
// }
//
// @Override
// public void shutdown(boolean waitForJobsToComplete) {
//
// }
//
// @Override
// public int getPoolSize() {
// return 1;
// }
//
// @Override
// public void setInstanceId(String schedInstId) {
//
// }
//
// @Override
// public void setInstanceName(String schedName) {
//
// }
//
// // support
// public void setThreadCount(int count) {
// //
// }
//
//}

View File

@ -1,58 +0,0 @@
package com.xxl.job.admin.core.quartz;
import org.quartz.SchedulerConfigException;
import org.quartz.spi.ThreadPool;
/**
* single thread pool, for async trigger
*
* @author xuxueli 2019-03-06
*/
public class XxlJobThreadPool implements ThreadPool {
@Override
public boolean runInThread(Runnable runnable) {
// async run
runnable.run();
return true;
//return false;
}
@Override
public int blockForAvailableThreads() {
return 1;
}
@Override
public void initialize() throws SchedulerConfigException {
}
@Override
public void shutdown(boolean waitForJobsToComplete) {
}
@Override
public int getPoolSize() {
return 1;
}
@Override
public void setInstanceId(String schedInstId) {
}
@Override
public void setInstanceName(String schedName) {
}
// support
public void setThreadCount(int count) {
//
}
}

View File

@ -1,7 +1,7 @@
package com.xxl.job.admin.core.route.strategy;
import com.xxl.job.admin.core.conf.XxlJobScheduler;
import com.xxl.job.admin.core.route.ExecutorRouter;
import com.xxl.job.admin.core.schedule.XxlJobDynamicScheduler;
import com.xxl.job.admin.core.util.I18nUtil;
import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.biz.model.ReturnT;
@ -21,7 +21,7 @@ public class ExecutorRouteBusyover extends ExecutorRouter {
// beat
ReturnT<String> idleBeatResult = null;
try {
ExecutorBiz executorBiz = XxlJobDynamicScheduler.getExecutorBiz(address);
ExecutorBiz executorBiz = XxlJobScheduler.getExecutorBiz(address);
idleBeatResult = executorBiz.idleBeat(triggerParam.getJobId());
} catch (Exception e) {
logger.error(e.getMessage(), e);

View File

@ -1,7 +1,7 @@
package com.xxl.job.admin.core.route.strategy;
import com.xxl.job.admin.core.conf.XxlJobScheduler;
import com.xxl.job.admin.core.route.ExecutorRouter;
import com.xxl.job.admin.core.schedule.XxlJobDynamicScheduler;
import com.xxl.job.admin.core.util.I18nUtil;
import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.biz.model.ReturnT;
@ -22,7 +22,7 @@ public class ExecutorRouteFailover extends ExecutorRouter {
// beat
ReturnT<String> beatResult = null;
try {
ExecutorBiz executorBiz = XxlJobDynamicScheduler.getExecutorBiz(address);
ExecutorBiz executorBiz = XxlJobScheduler.getExecutorBiz(address);
beatResult = executorBiz.beat();
} catch (Exception e) {
logger.error(e.getMessage(), e);

View File

@ -1,413 +0,0 @@
package com.xxl.job.admin.core.schedule;
import com.xxl.job.admin.core.conf.XxlJobAdminConfig;
import com.xxl.job.admin.core.jobbean.RemoteHttpJobBean;
import com.xxl.job.admin.core.model.XxlJobInfo;
import com.xxl.job.admin.core.thread.JobFailMonitorHelper;
import com.xxl.job.admin.core.thread.JobRegistryMonitorHelper;
import com.xxl.job.admin.core.thread.JobTriggerPoolHelper;
import com.xxl.job.admin.core.util.I18nUtil;
import com.xxl.job.core.biz.AdminBiz;
import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.enums.ExecutorBlockStrategyEnum;
import com.xxl.rpc.remoting.invoker.XxlRpcInvokerFactory;
import com.xxl.rpc.remoting.invoker.call.CallType;
import com.xxl.rpc.remoting.invoker.reference.XxlRpcReferenceBean;
import com.xxl.rpc.remoting.invoker.route.LoadBalance;
import com.xxl.rpc.remoting.net.NetEnum;
import com.xxl.rpc.remoting.net.impl.servlet.server.ServletServerHandler;
import com.xxl.rpc.remoting.provider.XxlRpcProviderFactory;
import com.xxl.rpc.serialize.Serializer;
import org.quartz.*;
import org.quartz.Trigger.TriggerState;
import org.quartz.impl.triggers.CronTriggerImpl;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.Assert;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Date;
import java.util.concurrent.ConcurrentHashMap;
/**
* base quartz scheduler util
* @author xuxueli 2015-12-19 16:13:53
*/
public final class XxlJobDynamicScheduler {
private static final Logger logger = LoggerFactory.getLogger(XxlJobDynamicScheduler.class);
// ---------------------- param ----------------------
// scheduler
private static Scheduler scheduler;
public void setScheduler(Scheduler scheduler) {
XxlJobDynamicScheduler.scheduler = scheduler;
}
// ---------------------- init + destroy ----------------------
public void start() throws Exception {
// valid
Assert.notNull(scheduler, "quartz scheduler is null");
// init i18n
initI18n();
// admin registry monitor run
JobRegistryMonitorHelper.getInstance().start();
// admin monitor run
JobFailMonitorHelper.getInstance().start();
// admin-server
initRpcProvider();
logger.info(">>>>>>>>> init xxl-job admin success.");
}
public void destroy() throws Exception {
// admin trigger pool stop
JobTriggerPoolHelper.toStop();
// admin registry stop
JobRegistryMonitorHelper.getInstance().toStop();
// admin monitor stop
JobFailMonitorHelper.getInstance().toStop();
// admin-server
stopRpcProvider();
}
// ---------------------- I18n ----------------------
private void initI18n(){
for (ExecutorBlockStrategyEnum item:ExecutorBlockStrategyEnum.values()) {
item.setTitle(I18nUtil.getString("jobconf_block_".concat(item.name())));
}
}
// ---------------------- admin rpc provider (no server version) ----------------------
private static ServletServerHandler servletServerHandler;
private void initRpcProvider(){
// init
XxlRpcProviderFactory xxlRpcProviderFactory = new XxlRpcProviderFactory();
xxlRpcProviderFactory.initConfig(
NetEnum.NETTY_HTTP,
Serializer.SerializeEnum.HESSIAN.getSerializer(),
null,
0,
XxlJobAdminConfig.getAdminConfig().getAccessToken(),
null,
null);
// add services
xxlRpcProviderFactory.addService(AdminBiz.class.getName(), null, XxlJobAdminConfig.getAdminConfig().getAdminBiz());
// servlet handler
servletServerHandler = new ServletServerHandler(xxlRpcProviderFactory);
}
private void stopRpcProvider() throws Exception {
XxlRpcInvokerFactory.getInstance().stop();
}
public static void invokeAdminService(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
servletServerHandler.handle(null, request, response);
}
// ---------------------- executor-client ----------------------
private static ConcurrentHashMap<String, ExecutorBiz> executorBizRepository = new ConcurrentHashMap<String, ExecutorBiz>();
public static ExecutorBiz getExecutorBiz(String address) throws Exception {
// valid
if (address==null || address.trim().length()==0) {
return null;
}
// load-cache
address = address.trim();
ExecutorBiz executorBiz = executorBizRepository.get(address);
if (executorBiz != null) {
return executorBiz;
}
// set-cache
executorBiz = (ExecutorBiz) new XxlRpcReferenceBean(
NetEnum.NETTY_HTTP,
Serializer.SerializeEnum.HESSIAN.getSerializer(),
CallType.SYNC,
LoadBalance.ROUND,
ExecutorBiz.class,
null,
5000,
address,
XxlJobAdminConfig.getAdminConfig().getAccessToken(),
null,
null).getObject();
executorBizRepository.put(address, executorBiz);
return executorBiz;
}
// ---------------------- schedule util ----------------------
/**
* fill job info
*
* @param jobInfo
*/
public static void fillJobInfo(XxlJobInfo jobInfo) {
String name = String.valueOf(jobInfo.getId());
// trigger key
TriggerKey triggerKey = TriggerKey.triggerKey(name);
try {
// trigger cron
Trigger trigger = scheduler.getTrigger(triggerKey);
if (trigger!=null && trigger instanceof CronTriggerImpl) {
String cronExpression = ((CronTriggerImpl) trigger).getCronExpression();
jobInfo.setJobCron(cronExpression);
}
// trigger state
TriggerState triggerState = scheduler.getTriggerState(triggerKey);
if (triggerState!=null) {
jobInfo.setJobStatus(triggerState.name());
}
//JobKey jobKey = new JobKey(jobInfo.getJobName(), String.valueOf(jobInfo.getJobGroup()));
//JobDetail jobDetail = scheduler.getJobDetail(jobKey);
//String jobClass = jobDetail.getJobClass().getName();
} catch (SchedulerException e) {
logger.error(e.getMessage(), e);
}
}
/**
* add trigger + job
*
* @param jobName
* @param cronExpression
* @return
* @throws SchedulerException
*/
public static boolean addJob(String jobName, String cronExpression) throws SchedulerException {
// 1job key
TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
JobKey jobKey = new JobKey(jobName);
// 2valid
if (scheduler.checkExists(triggerKey)) {
return true; // PASS
}
// 3corn trigger
CronScheduleBuilder cronScheduleBuilder = CronScheduleBuilder.cronSchedule(cronExpression).withMisfireHandlingInstructionDoNothing(); // withMisfireHandlingInstructionDoNothing 忽略掉调度终止过程中忽略的调度
CronTrigger cronTrigger = TriggerBuilder.newTrigger().withIdentity(triggerKey).withSchedule(cronScheduleBuilder).build();
// 4job detail
Class<? extends Job> jobClass_ = RemoteHttpJobBean.class; // Class.forName(jobInfo.getJobClass());
JobDetail jobDetail = JobBuilder.newJob(jobClass_).withIdentity(jobKey).build();
/*if (jobInfo.getJobData()!=null) {
JobDataMap jobDataMap = jobDetail.getJobDataMap();
jobDataMap.putAll(JacksonUtil.readValue(jobInfo.getJobData(), Map.class));
// JobExecutionContext context.getMergedJobDataMap().get("mailGuid");
}*/
// 5schedule job
Date date = scheduler.scheduleJob(jobDetail, cronTrigger);
logger.info(">>>>>>>>>>> addJob success(quartz), jobDetail:{}, cronTrigger:{}, date:{}", jobDetail, cronTrigger, date);
return true;
}
/**
* remove trigger + job
*
* @param jobName
* @return
* @throws SchedulerException
*/
public static boolean removeJob(String jobName) throws SchedulerException {
JobKey jobKey = new JobKey(jobName);
scheduler.deleteJob(jobKey);
/*TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
if (scheduler.checkExists(triggerKey)) {
scheduler.unscheduleJob(triggerKey); // trigger + job
}*/
logger.info(">>>>>>>>>>> removeJob success(quartz), jobKey:{}", jobKey);
return true;
}
/**
* updateJobCron
*
* @param jobName
* @param cronExpression
* @return
* @throws SchedulerException
*/
public static boolean updateJobCron(String jobName, String cronExpression) throws SchedulerException {
// 1job key
TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
// 2valid
if (!scheduler.checkExists(triggerKey)) {
return true; // PASS
}
CronTrigger oldTrigger = (CronTrigger) scheduler.getTrigger(triggerKey);
// 3avoid repeat cron
String oldCron = oldTrigger.getCronExpression();
if (oldCron.equals(cronExpression)){
return true; // PASS
}
// 4new cron trigger
CronScheduleBuilder cronScheduleBuilder = CronScheduleBuilder.cronSchedule(cronExpression).withMisfireHandlingInstructionDoNothing();
oldTrigger = oldTrigger.getTriggerBuilder().withIdentity(triggerKey).withSchedule(cronScheduleBuilder).build();
// 5rescheduleJob
scheduler.rescheduleJob(triggerKey, oldTrigger);
/*
JobKey jobKey = new JobKey(jobName);
// old job detail
JobDetail jobDetail = scheduler.getJobDetail(jobKey);
// new trigger
HashSet<Trigger> triggerSet = new HashSet<Trigger>();
triggerSet.add(cronTrigger);
// cover trigger of job detail
scheduler.scheduleJob(jobDetail, triggerSet, true);*/
logger.info(">>>>>>>>>>> resumeJob success, JobName:{}", jobName);
return true;
}
/**
* pause
*
* @param jobName
* @return
* @throws SchedulerException
*/
/*public static boolean pauseJob(String jobName) throws SchedulerException {
TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
boolean result = false;
if (scheduler.checkExists(triggerKey)) {
scheduler.pauseTrigger(triggerKey);
result = true;
}
logger.info(">>>>>>>>>>> pauseJob {}, triggerKey:{}", (result?"success":"fail"),triggerKey);
return result;
}*/
/**
* resume
*
* @param jobName
* @return
* @throws SchedulerException
*/
/*public static boolean resumeJob(String jobName) throws SchedulerException {
TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
boolean result = false;
if (scheduler.checkExists(triggerKey)) {
scheduler.resumeTrigger(triggerKey);
result = true;
}
logger.info(">>>>>>>>>>> resumeJob {}, triggerKey:{}", (result?"success":"fail"), triggerKey);
return result;
}*/
/**
* run
*
* @param jobName
* @return
* @throws SchedulerException
*/
/*public static boolean triggerJob(String jobName) throws SchedulerException {
// TriggerKey : name + group
JobKey jobKey = new JobKey(jobName);
TriggerKey triggerKey = TriggerKey.triggerKey(jobName);
boolean result = false;
if (scheduler.checkExists(triggerKey)) {
scheduler.triggerJob(jobKey);
result = true;
logger.info(">>>>>>>>>>> runJob success, jobKey:{}", jobKey);
} else {
logger.info(">>>>>>>>>>> runJob fail, jobKey:{}", jobKey);
}
return result;
}*/
/**
* finaAllJobList
*
* @return
*//*
@Deprecated
public static List<Map<String, Object>> finaAllJobList(){
List<Map<String, Object>> jobList = new ArrayList<Map<String,Object>>();
try {
if (scheduler.getJobGroupNames()==null || scheduler.getJobGroupNames().size()==0) {
return null;
}
String groupName = scheduler.getJobGroupNames().get(0);
Set<JobKey> jobKeys = scheduler.getJobKeys(GroupMatcher.jobGroupEquals(groupName));
if (jobKeys!=null && jobKeys.size()>0) {
for (JobKey jobKey : jobKeys) {
TriggerKey triggerKey = TriggerKey.triggerKey(jobKey.getName(), Scheduler.DEFAULT_GROUP);
Trigger trigger = scheduler.getTrigger(triggerKey);
JobDetail jobDetail = scheduler.getJobDetail(jobKey);
TriggerState triggerState = scheduler.getTriggerState(triggerKey);
Map<String, Object> jobMap = new HashMap<String, Object>();
jobMap.put("TriggerKey", triggerKey);
jobMap.put("Trigger", trigger);
jobMap.put("JobDetail", jobDetail);
jobMap.put("TriggerState", triggerState);
jobList.add(jobMap);
}
}
} catch (SchedulerException e) {
logger.error(e.getMessage(), e);
return null;
}
return jobList;
}*/
}

View File

@ -0,0 +1,218 @@
package com.xxl.job.admin.core.thread;
import com.xxl.job.admin.core.conf.XxlJobAdminConfig;
import com.xxl.job.admin.core.model.XxlJobInfo;
import com.xxl.job.admin.core.cron.CronExpression;
import com.xxl.job.admin.core.trigger.TriggerTypeEnum;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
/**
* @author xuxueli 2019-05-21
*/
public class JobScheduleHelper {
private static Logger logger = LoggerFactory.getLogger(JobScheduleHelper.class);
private static JobScheduleHelper instance = new JobScheduleHelper();
public static JobScheduleHelper getInstance(){
return instance;
}
private Thread scheduleThread;
private Thread ringThread;
private volatile boolean toStop = false;
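// time-ring data: second-of-minute slot (0-59) -> ids of jobs due to fire in that second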
private volatile static Map<Integer, List<Integer>> ringData = new ConcurrentHashMap<>();
public void start(){
// schedule thread
scheduleThread = new Thread(new Runnable() {
@Override
public void run() {
while (!toStop) {
// random sleep within 1s (500-1000 ms) to stagger scheduling across cluster nodes
try {
TimeUnit.MILLISECONDS.sleep(500+new Random().nextInt(500));
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
// match jobs due for scheduling
Connection conn = null;
PreparedStatement preparedStatement = null;
try {
if (conn==null || conn.isClosed()) {
conn = XxlJobAdminConfig.getAdminConfig().getDataSource().getConnection();
}
conn.setAutoCommit(false);
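// cluster lock: "select ... for update" on the schedule_lock row, so only one admin node runs this schedule round inside the transaction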
preparedStatement = conn.prepareStatement( "select * from XXL_JOB_LOCK where lock_name = 'schedule_lock' for update" );
preparedStatement.execute();
// tx start
// 1. pre-read jobs due within the next 30s ( ... overdue ... now ... +30s ... )
long maxNextTime = System.currentTimeMillis() + 30000;
long nowTime = System.currentTimeMillis();
List<XxlJobInfo> scheduleList = XxlJobAdminConfig.getAdminConfig().getXxlJobInfoDao().scheduleJobQuery(maxNextTime);
if (scheduleList!=null && scheduleList.size()>0) {
// 2. push due jobs into the time-ring
for (XxlJobInfo jobInfo: scheduleList) {
// misfire strategy: if the trigger time expired more than 10s ago, skip this round; otherwise trigger immediately by treating "now" as the trigger time
if (jobInfo.getTriggerNextTime() < nowTime-10000) {
continue;
}
if (jobInfo.getTriggerNextTime() < nowTime) {
jobInfo.setTriggerNextTime(nowTime);
}
// push async ring
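// ring slot is the trigger time's second within the minute; the ring thread fires the job when that second comes around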
int second = (int)((jobInfo.getTriggerNextTime()/1000)%60);
List<Integer> ringItemData = ringData.get(second);
if (ringItemData == null) {
ringItemData = new ArrayList<Integer>();
ringData.put(second, ringItemData);
}
ringItemData.add(jobInfo.getId());
logger.info(">>>>>>>>>>> xxl-job, push time-ring : " + second + " = " + ringItemData );
}
// 3. update trigger info (last/next trigger time)
for (XxlJobInfo jobInfo: scheduleList) {
// update
jobInfo.setTriggerLastTime(jobInfo.getTriggerNextTime());
jobInfo.setTriggerNextTime(
new CronExpression(jobInfo.getJobCron())
.getNextValidTimeAfter(new Date(jobInfo.getTriggerNextTime()))
.getTime()
);
XxlJobAdminConfig.getAdminConfig().getXxlJobInfoDao().scheduleUpdate(jobInfo);
}
}
// tx stop
conn.commit();
} catch (Exception e) {
if (!toStop) {
logger.error(">>>>>>>>>>> xxl-job, JobScheduleHelper#scheduleThread error", e);
}
} finally {
if (conn != null) {
try {
conn.close();
} catch (SQLException e) {
}
}
if (null != preparedStatement) {
try {
preparedStatement.close();
} catch (SQLException ignore) {
}
}
}
}
logger.info(">>>>>>>>>>> xxl-job, JobScheduleHelper#scheduleThread stop");
}
});
scheduleThread.setDaemon(true);
scheduleThread.setName("xxl-job, admin JobScheduleHelper#scheduleThread");
scheduleThread.start();
// ring thread
ringThread = new Thread(new Runnable() {
@Override
public void run() {
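// lastSecond remembers the previously processed ring slot, so seconds skipped by a slow beat can still be drained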
int lastSecond = -1;
while (!toStop) {
try {
// second data
List<Integer> ringItemData = new ArrayList<>();
int nowSecond = (int)((System.currentTimeMillis()/1000)%60);   // re-read the current second each loop, so slow processing does not silently skip ring ticks
if (lastSecond == -1) {
if (ringData.containsKey(nowSecond)) {
List<Integer> tmpData = ringData.remove(nowSecond);
if (tmpData != null) {
ringItemData.addAll(tmpData);
}
}
lastSecond = nowSecond;
} else {
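// walk forward from lastSecond+1 up to nowSecond (at most one full ring), draining every slot in between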
for (int i = 1; i <=60; i++) {
int secondItem = (lastSecond+i)%60;
List<Integer> tmpData = ringData.remove(secondItem);
if (tmpData != null) {
ringItemData.addAll(tmpData);
}
if (secondItem == nowSecond) {
break;
}
}
lastSecond = nowSecond;
}
logger.info(">>>>>>>>>>> xxl-job, time-ring beat : " + nowSecond + " = " + ringItemData );
if (ringItemData!=null && ringItemData.size()>0) {
// do trigger
for (int jobId: ringItemData) {
JobTriggerPoolHelper.trigger(jobId, TriggerTypeEnum.CRON, -1, null, null);
}
// clear
ringItemData.clear();
}
} catch (Exception e) {
if (!toStop) {
logger.error(">>>>>>>>>>> xxl-job, JobScheduleHelper#ringThread error", e);
}
}
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
}
logger.info(">>>>>>>>>>> xxl-job, JobScheduleHelper#ringThread stop");
}
});
ringThread.setDaemon(true);
ringThread.setName("xxl-job, admin JobScheduleHelper#ringThread");
ringThread.start();
}
public void toStop(){
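// signal both worker threads to exit their loops, then interrupt and wait for them to finish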
toStop = true;
// interrupt and wait
scheduleThread.interrupt();
try {
scheduleThread.join();
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
// interrupt and wait
ringThread.interrupt();
try {
ringThread.join();
} catch (InterruptedException e) {
logger.error(e.getMessage(), e);
}
}
}

View File

@ -1,11 +1,11 @@
package com.xxl.job.admin.core.trigger;
import com.xxl.job.admin.core.conf.XxlJobAdminConfig;
import com.xxl.job.admin.core.conf.XxlJobScheduler;
import com.xxl.job.admin.core.model.XxlJobGroup;
import com.xxl.job.admin.core.model.XxlJobInfo;
import com.xxl.job.admin.core.model.XxlJobLog;
import com.xxl.job.admin.core.route.ExecutorRouteStrategyEnum;
import com.xxl.job.admin.core.schedule.XxlJobDynamicScheduler;
import com.xxl.job.admin.core.util.I18nUtil;
import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.biz.model.ReturnT;
@ -192,7 +192,7 @@ public class XxlJobTrigger {
public static ReturnT<String> runExecutor(TriggerParam triggerParam, String address){
ReturnT<String> runResult = null;
try {
ExecutorBiz executorBiz = XxlJobDynamicScheduler.getExecutorBiz(address);
ExecutorBiz executorBiz = XxlJobScheduler.getExecutorBiz(address);
runResult = executorBiz.run(triggerParam);
} catch (Exception e) {
logger.error(">>>>>>>>>>> xxl-job trigger error, please check if the executor[{}] is running.", address, e);

View File

@ -29,7 +29,7 @@ public interface XxlJobInfoDao {
public XxlJobInfo loadById(@Param("id") int id);
public int update(XxlJobInfo item);
public int update(XxlJobInfo xxlJobInfo);
public int delete(@Param("id") int id);
@ -37,4 +37,9 @@ public interface XxlJobInfoDao {
public int findAllCount();
public List<XxlJobInfo> scheduleJobQuery(@Param("maxNextTime") long maxNextTime);
public int scheduleUpdate(XxlJobInfo xxlJobInfo);
}

View File

@ -28,7 +28,7 @@ public interface XxlJobService {
public Map<String, Object> pageList(int start, int length, int jobGroup, String jobDesc, String executorHandler, String filterTime);
/**
* add job, default quartz stop
* add job
*
* @param jobInfo
* @return
@ -36,7 +36,7 @@ public interface XxlJobService {
public ReturnT<String> add(XxlJobInfo jobInfo);
/**
* update job, update quartz-cron if started
* update job
*
* @param jobInfo
* @return
@ -44,15 +44,15 @@ public interface XxlJobService {
public ReturnT<String> update(XxlJobInfo jobInfo);
/**
* remove job, unbind quartz
* remove job
*
* @param id
* @return
*/
public ReturnT<String> remove(int id);
/**
* start job, bind quartz
* start job
*
* @param id
* @return
@ -60,7 +60,7 @@ public interface XxlJobService {
public ReturnT<String> start(int id);
/**
* stop job, unbind quartz
* stop job
*
* @param id
* @return

View File

@ -2,8 +2,8 @@ package com.xxl.job.admin.service.impl;
import com.xxl.job.admin.core.model.XxlJobGroup;
import com.xxl.job.admin.core.model.XxlJobInfo;
import com.xxl.job.admin.core.cron.CronExpression;
import com.xxl.job.admin.core.route.ExecutorRouteStrategyEnum;
import com.xxl.job.admin.core.schedule.XxlJobDynamicScheduler;
import com.xxl.job.admin.core.util.I18nUtil;
import com.xxl.job.admin.dao.XxlJobGroupDao;
import com.xxl.job.admin.dao.XxlJobInfoDao;
@ -14,14 +14,13 @@ import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.enums.ExecutorBlockStrategyEnum;
import com.xxl.job.core.glue.GlueTypeEnum;
import com.xxl.job.core.util.DateUtil;
import org.quartz.CronExpression;
import org.quartz.SchedulerException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import javax.annotation.Resource;
import java.text.MessageFormat;
import java.text.ParseException;
import java.util.*;
/**
@ -48,13 +47,6 @@ public class XxlJobServiceImpl implements XxlJobService {
List<XxlJobInfo> list = xxlJobInfoDao.pageList(start, length, jobGroup, jobDesc, executorHandler);
int list_count = xxlJobInfoDao.pageListCount(start, length, jobGroup, jobDesc, executorHandler);
// fill job info
if (list!=null && list.size()>0) {
for (XxlJobInfo jobInfo : list) {
XxlJobDynamicScheduler.fillJobInfo(jobInfo);
}
}
// package result
Map<String, Object> maps = new HashMap<String, Object>();
maps.put("recordsTotal", list_count);		// total record count
@ -197,6 +189,15 @@ public class XxlJobServiceImpl implements XxlJobService {
return new ReturnT<String>(ReturnT.FAIL_CODE, (I18nUtil.getString("jobinfo_field_id")+I18nUtil.getString("system_not_found")) );
}
// next trigger time
long nextTriggerTime = 0;
try {
nextTriggerTime = new CronExpression(jobInfo.getJobCron()).getNextValidTimeAfter(new Date()).getTime();
} catch (ParseException e) {
logger.error(e.getMessage(), e);
return new ReturnT<String>(ReturnT.FAIL_CODE, I18nUtil.getString("jobinfo_field_cron_unvalid")+" | "+ e.getMessage());
}
exists_jobInfo.setJobGroup(jobInfo.getJobGroup());
exists_jobInfo.setJobCron(jobInfo.getJobCron());
exists_jobInfo.setJobDesc(jobInfo.getJobDesc());
@ -209,18 +210,10 @@ public class XxlJobServiceImpl implements XxlJobService {
exists_jobInfo.setExecutorTimeout(jobInfo.getExecutorTimeout());
exists_jobInfo.setExecutorFailRetryCount(jobInfo.getExecutorFailRetryCount());
exists_jobInfo.setChildJobId(jobInfo.getChildJobId());
exists_jobInfo.setTriggerNextTime(nextTriggerTime);
xxlJobInfoDao.update(exists_jobInfo);
// update quartz-cron if started
try {
String qz_name = String.valueOf(exists_jobInfo.getId());
XxlJobDynamicScheduler.updateJobCron(qz_name, exists_jobInfo.getJobCron());
} catch (SchedulerException e) {
logger.error(e.getMessage(), e);
return ReturnT.FAIL;
}
return ReturnT.SUCCESS;
}
@ -230,77 +223,55 @@ public class XxlJobServiceImpl implements XxlJobService {
if (xxlJobInfo == null) {
return ReturnT.SUCCESS;
}
String name = String.valueOf(xxlJobInfo.getId());
try {
// unbind quartz
XxlJobDynamicScheduler.removeJob(name);
xxlJobInfoDao.delete(id);
xxlJobLogDao.delete(id);
xxlJobLogGlueDao.deleteByJobId(id);
return ReturnT.SUCCESS;
} catch (SchedulerException e) {
logger.error(e.getMessage(), e);
return ReturnT.FAIL;
}
}
@Override
public ReturnT<String> start(int id) {
XxlJobInfo xxlJobInfo = xxlJobInfoDao.loadById(id);
String name = String.valueOf(xxlJobInfo.getId());
String cronExpression = xxlJobInfo.getJobCron();
// next trigger time
long nextTriggerTime = 0;
try {
boolean ret = XxlJobDynamicScheduler.addJob(name, cronExpression);
return ret?ReturnT.SUCCESS:ReturnT.FAIL;
} catch (SchedulerException e) {
nextTriggerTime = new CronExpression(xxlJobInfo.getJobCron()).getNextValidTimeAfter(new Date()).getTime();
} catch (ParseException e) {
logger.error(e.getMessage(), e);
return ReturnT.FAIL;
return new ReturnT<String>(ReturnT.FAIL_CODE, I18nUtil.getString("jobinfo_field_cron_unvalid")+" | "+ e.getMessage());
}
xxlJobInfo.setTriggerStatus(1);
xxlJobInfo.setTriggerLastTime(0);
xxlJobInfo.setTriggerNextTime(nextTriggerTime);
xxlJobInfoDao.update(xxlJobInfo);
return ReturnT.SUCCESS;
}
@Override
public ReturnT<String> stop(int id) {
XxlJobInfo xxlJobInfo = xxlJobInfoDao.loadById(id);
String name = String.valueOf(xxlJobInfo.getId());
// next trigger time
long nextTriggerTime = 0;
try {
// bind quartz
boolean ret = XxlJobDynamicScheduler.removeJob(name);
return ret?ReturnT.SUCCESS:ReturnT.FAIL;
} catch (SchedulerException e) {
nextTriggerTime = new CronExpression(xxlJobInfo.getJobCron()).getNextValidTimeAfter(new Date()).getTime();
} catch (ParseException e) {
logger.error(e.getMessage(), e);
return ReturnT.FAIL;
}
return new ReturnT<String>(ReturnT.FAIL_CODE, I18nUtil.getString("jobinfo_field_cron_unvalid")+" | "+ e.getMessage());
}
/*@Override
public ReturnT<String> triggerJob(int id, int failRetryCount) {
xxlJobInfo.setTriggerStatus(0);
xxlJobInfo.setTriggerLastTime(0);
xxlJobInfo.setTriggerNextTime(nextTriggerTime);
JobTriggerPoolHelper.trigger(id, failRetryCount);
xxlJobInfoDao.update(xxlJobInfo);
return ReturnT.SUCCESS;
*//*XxlJobInfo xxlJobInfo = xxlJobInfoDao.loadById(id);
if (xxlJobInfo == null) {
return new ReturnT<String>(ReturnT.FAIL_CODE, (I18nUtil.getString("jobinfo_field_id")+I18nUtil.getString("system_unvalid")) );
}
String group = String.valueOf(xxlJobInfo.getJobGroup());
String name = String.valueOf(xxlJobInfo.getId());
try {
XxlJobDynamicScheduler.triggerJob(name, group);
return ReturnT.SUCCESS;
} catch (SchedulerException e) {
logger.error(e.getMessage(), e);
return new ReturnT<String>(ReturnT.FAIL_CODE, e.getMessage());
}*//*
}*/
@Override
public Map<String, Object> dashboardInfo() {

View File

@ -23,24 +23,24 @@
<select id="findAll" resultMap="XxlJobGroup">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_GROUP AS t
FROM XXL_JOB_GROUP AS t
ORDER BY t.order ASC
</select>
<select id="findByAddressType" parameterType="java.lang.Integer" resultMap="XxlJobGroup">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_GROUP AS t
FROM XXL_JOB_GROUP AS t
WHERE t.address_type = #{addressType}
ORDER BY t.order ASC
</select>
<insert id="save" parameterType="com.xxl.job.admin.core.model.XxlJobGroup" useGeneratedKeys="true" keyProperty="id" >
INSERT INTO XXL_JOB_QRTZ_TRIGGER_GROUP ( `app_name`, `title`, `order`, `address_type`, `address_list`)
INSERT INTO XXL_JOB_GROUP ( `app_name`, `title`, `order`, `address_type`, `address_list`)
values ( #{appName}, #{title}, #{order}, #{addressType}, #{addressList});
</insert>
<update id="update" parameterType="com.xxl.job.admin.core.model.XxlJobGroup" >
UPDATE XXL_JOB_QRTZ_TRIGGER_GROUP
UPDATE XXL_JOB_GROUP
SET `app_name` = #{appName},
`title` = #{title},
`order` = #{order},
@ -50,13 +50,13 @@
</update>
<delete id="remove" parameterType="java.lang.Integer" >
DELETE FROM XXL_JOB_QRTZ_TRIGGER_GROUP
DELETE FROM XXL_JOB_GROUP
WHERE id = #{id}
</delete>
<select id="load" parameterType="java.lang.Integer" resultMap="XxlJobGroup">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_GROUP AS t
FROM XXL_JOB_GROUP AS t
WHERE t.id = #{id}
</select>

View File

@ -29,6 +29,10 @@
<result column="glue_updatetime" property="glueUpdatetime" />
<result column="child_jobid" property="childJobId" />
<result column="trigger_status" property="triggerStatus" />
<result column="trigger_last_time" property="triggerLastTime" />
<result column="trigger_next_time" property="triggerNextTime" />
</resultMap>
<sql id="Base_Column_List">
@ -50,12 +54,15 @@
t.glue_source,
t.glue_remark,
t.glue_updatetime,
t.child_jobid
t.child_jobid,
t.trigger_status,
t.trigger_last_time,
t.trigger_next_time
</sql>
<select id="pageList" parameterType="java.util.HashMap" resultMap="XxlJobInfo">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_INFO AS t
FROM XXL_JOB_INFO AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="jobGroup gt 0">
AND t.job_group = #{jobGroup}
@ -73,7 +80,7 @@
<select id="pageListCount" parameterType="java.util.HashMap" resultType="int">
SELECT count(1)
FROM XXL_JOB_QRTZ_TRIGGER_INFO AS t
FROM XXL_JOB_INFO AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="jobGroup gt 0">
AND t.job_group = #{jobGroup}
@ -88,7 +95,7 @@
</select>
<insert id="save" parameterType="com.xxl.job.admin.core.model.XxlJobInfo" useGeneratedKeys="true" keyProperty="id" >
INSERT INTO XXL_JOB_QRTZ_TRIGGER_INFO (
INSERT INTO XXL_JOB_INFO (
job_group,
job_cron,
job_desc,
@ -106,7 +113,10 @@
glue_source,
glue_remark,
glue_updatetime,
child_jobid
child_jobid,
trigger_status,
trigger_last_time,
trigger_next_time
) VALUES (
#{jobGroup},
#{jobCron},
@ -125,7 +135,10 @@
#{glueSource},
#{glueRemark},
NOW(),
#{childJobId}
#{childJobId},
#{triggerStatus},
#{triggerLastTime},
#{triggerNextTime}
);
<!--<selectKey resultType="java.lang.Integer" order="AFTER" keyProperty="id">
SELECT LAST_INSERT_ID()
@ -135,12 +148,12 @@
<select id="loadById" parameterType="java.util.HashMap" resultMap="XxlJobInfo">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_INFO AS t
FROM XXL_JOB_INFO AS t
WHERE t.id = #{id}
</select>
<update id="update" parameterType="com.xxl.job.admin.core.model.XxlJobInfo" >
UPDATE XXL_JOB_QRTZ_TRIGGER_INFO
UPDATE XXL_JOB_INFO
SET
job_group = #{jobGroup},
job_cron = #{jobCron},
@ -158,25 +171,44 @@
glue_source = #{glueSource},
glue_remark = #{glueRemark},
glue_updatetime = #{glueUpdatetime},
child_jobid = #{childJobId}
child_jobid = #{childJobId},
trigger_status = #{triggerStatus},
trigger_last_time = #{triggerLastTime},
trigger_next_time = #{triggerNextTime}
WHERE id = #{id}
</update>
<delete id="delete" parameterType="java.util.HashMap">
DELETE
FROM XXL_JOB_QRTZ_TRIGGER_INFO
FROM XXL_JOB_INFO
WHERE id = #{id}
</delete>
<select id="getJobsByGroup" parameterType="java.util.HashMap" resultMap="XxlJobInfo">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_INFO AS t
FROM XXL_JOB_INFO AS t
WHERE t.job_group = #{jobGroup}
</select>
<select id="findAllCount" resultType="int">
SELECT count(1)
FROM XXL_JOB_QRTZ_TRIGGER_INFO
FROM XXL_JOB_INFO
</select>
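<!-- pre-read for the schedule thread: running jobs (trigger_status = 1) whose next trigger time falls before the look-ahead bound -->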
<select id="scheduleJobQuery" parameterType="java.util.HashMap" resultMap="XxlJobInfo">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_INFO AS t
WHERE t.trigger_status = 1
and t.trigger_next_time<![CDATA[ < ]]> #{maxNextTime}
</select>
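<!-- persist the last/next trigger time computed by the schedule thread -->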
<update id="scheduleUpdate" parameterType="com.xxl.job.admin.core.model.XxlJobInfo" >
UPDATE XXL_JOB_INFO
SET
trigger_last_time = #{triggerLastTime},
trigger_next_time = #{triggerNextTime}
WHERE id = #{id}
</update>
</mapper>

View File

@ -24,7 +24,7 @@
</sql>
<insert id="save" parameterType="com.xxl.job.admin.core.model.XxlJobLogGlue" useGeneratedKeys="true" keyProperty="id" >
INSERT INTO XXL_JOB_QRTZ_TRIGGER_LOGGLUE (
INSERT INTO XXL_JOB_LOGGLUE (
`job_id`,
`glue_type`,
`glue_source`,
@ -46,16 +46,16 @@
<select id="findByJobId" parameterType="java.lang.Integer" resultMap="XxlJobLogGlue">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_LOGGLUE AS t
FROM XXL_JOB_LOGGLUE AS t
WHERE t.job_id = #{jobId}
ORDER BY id DESC
</select>
<delete id="removeOld" >
DELETE FROM XXL_JOB_QRTZ_TRIGGER_LOGGLUE
DELETE FROM XXL_JOB_LOGGLUE
WHERE id NOT in(
SELECT id FROM(
SELECT id FROM XXL_JOB_QRTZ_TRIGGER_LOGGLUE
SELECT id FROM XXL_JOB_LOGGLUE
WHERE `job_id` = #{jobId}
ORDER BY update_time desc
LIMIT 0, #{limit}
@ -64,7 +64,7 @@
</delete>
<delete id="deleteByJobId" parameterType="java.lang.Integer" >
DELETE FROM XXL_JOB_QRTZ_TRIGGER_LOGGLUE
DELETE FROM XXL_JOB_LOGGLUE
WHERE `job_id` = #{jobId}
</delete>

View File

@ -46,7 +46,7 @@
<select id="pageList" resultMap="XxlJobLog">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_LOG AS t
FROM XXL_JOB_LOG AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="jobId==0 and jobGroup gt 0">
AND t.job_group = #{jobGroup}
@ -80,7 +80,7 @@
<select id="pageListCount" resultType="int">
SELECT count(1)
FROM XXL_JOB_QRTZ_TRIGGER_LOG AS t
FROM XXL_JOB_LOG AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="jobId==0 and jobGroup gt 0">
AND t.job_group = #{jobGroup}
@ -112,13 +112,13 @@
<select id="load" parameterType="java.lang.Integer" resultMap="XxlJobLog">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_LOG AS t
FROM XXL_JOB_LOG AS t
WHERE t.id = #{id}
</select>
<insert id="save" parameterType="com.xxl.job.admin.core.model.XxlJobLog" useGeneratedKeys="true" keyProperty="id" >
INSERT INTO XXL_JOB_QRTZ_TRIGGER_LOG (
INSERT INTO XXL_JOB_LOG (
`job_group`,
`job_id`,
`trigger_time`,
@ -137,7 +137,7 @@
</insert>
<update id="updateTriggerInfo" >
UPDATE XXL_JOB_QRTZ_TRIGGER_LOG
UPDATE XXL_JOB_LOG
SET
`trigger_time`= #{triggerTime},
`trigger_code`= #{triggerCode},
@ -151,7 +151,7 @@
</update>
<update id="updateHandleInfo">
UPDATE XXL_JOB_QRTZ_TRIGGER_LOG
UPDATE XXL_JOB_LOG
SET
`handle_time`= #{handleTime},
`handle_code`= #{handleCode},
@ -160,13 +160,13 @@
</update>
<delete id="delete" >
delete from XXL_JOB_QRTZ_TRIGGER_LOG
delete from XXL_JOB_LOG
WHERE job_id = #{jobId}
</delete>
<select id="triggerCountByHandleCode" resultType="int" >
SELECT count(1)
FROM XXL_JOB_QRTZ_TRIGGER_LOG AS t
FROM XXL_JOB_LOG AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="handleCode gt 0">
AND t.handle_code = #{handleCode}
@ -180,13 +180,13 @@
COUNT(handle_code) triggerDayCount,
SUM(CASE WHEN (trigger_code in (0, 200) and handle_code = 0) then 1 else 0 end) as triggerDayCountRunning,
SUM(CASE WHEN handle_code = 200 then 1 else 0 end) as triggerDayCountSuc
FROM XXL_JOB_QRTZ_TRIGGER_LOG
FROM XXL_JOB_LOG
WHERE trigger_time BETWEEN #{from} and #{to}
GROUP BY triggerDay;
</select>
<delete id="clearLog" >
delete from XXL_JOB_QRTZ_TRIGGER_LOG
delete from XXL_JOB_LOG
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="jobGroup gt 0">
AND job_group = #{jobGroup}
@ -200,7 +200,7 @@
<if test="clearBeforeNum gt 0">
AND id NOT in(
SELECT id FROM(
SELECT id FROM XXL_JOB_QRTZ_TRIGGER_LOG AS t
SELECT id FROM XXL_JOB_LOG AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="jobGroup gt 0">
AND t.job_group = #{jobGroup}
@ -218,7 +218,7 @@
</delete>
<select id="findFailJobLogIds" resultType="int" >
SELECT id FROM `XXL_JOB_QRTZ_TRIGGER_LOG`
SELECT id FROM `XXL_JOB_LOG`
WHERE !(
(trigger_code in (0, 200) and handle_code = 0)
OR
@ -229,7 +229,7 @@
</select>
<update id="updateAlarmStatus" >
UPDATE XXL_JOB_QRTZ_TRIGGER_LOG
UPDATE XXL_JOB_LOG
SET
`alarm_status` = #{newAlarmStatus}
WHERE `id`= #{logId} AND `alarm_status` = #{oldAlarmStatus}

View File

@ -20,18 +20,18 @@
</sql>
<delete id="removeDead" parameterType="java.lang.Integer" >
DELETE FROM XXL_JOB_QRTZ_TRIGGER_REGISTRY
DELETE FROM XXL_JOB_REGISTRY
WHERE update_time <![CDATA[ < ]]> DATE_ADD(NOW(),INTERVAL -#{timeout} SECOND)
</delete>
<select id="findAll" parameterType="java.lang.Integer" resultMap="XxlJobRegistry">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_TRIGGER_REGISTRY AS t
FROM XXL_JOB_REGISTRY AS t
WHERE t.update_time <![CDATA[ > ]]> DATE_ADD(NOW(),INTERVAL -#{timeout} SECOND)
</select>
<update id="registryUpdate" >
UPDATE XXL_JOB_QRTZ_TRIGGER_REGISTRY
UPDATE XXL_JOB_REGISTRY
SET `update_time` = NOW()
WHERE `registry_group` = #{registryGroup}
AND `registry_key` = #{registryKey}
@ -39,12 +39,12 @@
</update>
<insert id="registrySave" >
INSERT INTO XXL_JOB_QRTZ_TRIGGER_REGISTRY( `registry_group` , `registry_key` , `registry_value`, `update_time`)
INSERT INTO XXL_JOB_REGISTRY( `registry_group` , `registry_key` , `registry_value`, `update_time`)
VALUES( #{registryGroup} , #{registryKey} , #{registryValue}, NOW())
</insert>
<delete id="registryDelete" >
DELETE FROM XXL_JOB_QRTZ_TRIGGER_REGISTRY
DELETE FROM XXL_JOB_REGISTRY
WHERE registry_group = #{registryGroup}
AND registry_key = #{registryKey}
AND registry_value = #{registryValue}

View File

@ -21,7 +21,7 @@
<select id="pageList" parameterType="java.util.HashMap" resultMap="XxlJobUser">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_USER AS t
FROM XXL_JOB_USER AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="username != null and username != ''">
AND t.username like CONCAT(CONCAT('%', #{username}), '%')
@ -36,7 +36,7 @@
<select id="pageListCount" parameterType="java.util.HashMap" resultType="int">
SELECT count(1)
FROM XXL_JOB_QRTZ_USER AS t
FROM XXL_JOB_USER AS t
<trim prefix="WHERE" prefixOverrides="AND | OR" >
<if test="username != null and username != ''">
AND t.username like CONCAT(CONCAT('%', #{username}), '%')
@ -49,12 +49,12 @@
<select id="loadByUserName" parameterType="java.util.HashMap" resultMap="XxlJobUser">
SELECT <include refid="Base_Column_List" />
FROM XXL_JOB_QRTZ_USER AS t
FROM XXL_JOB_USER AS t
WHERE t.username = #{username}
</select>
<insert id="save" parameterType="com.xxl.job.admin.core.model.XxlJobUser" useGeneratedKeys="true" keyProperty="id" >
INSERT INTO XXL_JOB_QRTZ_USER (
INSERT INTO XXL_JOB_USER (
username,
password,
role,
@ -68,7 +68,7 @@
</insert>
<update id="update" parameterType="com.xxl.job.admin.core.model.XxlJobUser" >
UPDATE XXL_JOB_QRTZ_USER
UPDATE XXL_JOB_USER
SET
<if test="password != null and password != ''">
password = #{password},
@ -80,7 +80,7 @@
<delete id="delete" parameterType="java.util.HashMap">
DELETE
FROM XXL_JOB_QRTZ_USER
FROM XXL_JOB_USER
WHERE id = #{id}
</delete>

View File

@ -1,29 +0,0 @@
# Default Properties file for use by StdSchedulerFactory
# to create a Quartz Scheduler Instance, if a different
# properties file is not explicitly specified.
#
org.quartz.scheduler.instanceName: DefaultQuartzScheduler
org.quartz.scheduler.instanceId: AUTO
org.quartz.scheduler.rmi.export: false
org.quartz.scheduler.rmi.proxy: false
org.quartz.scheduler.wrapJobExecutionInUserTransaction: false
#org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool
#org.quartz.threadPool.threadCount: 5
#org.quartz.threadPool.threadPriority: 5
#org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread: true
org.quartz.jobStore.misfireThreshold: 60000
org.quartz.jobStore.maxMisfiresToHandleAtATime: 1
#org.quartz.jobStore.class: org.quartz.simpl.RAMJobStore
# for async trigger
org.quartz.threadPool.class: com.xxl.job.admin.core.quartz.XxlJobThreadPool
# for cluster
org.quartz.jobStore.tablePrefix: XXL_JOB_QRTZ_
org.quartz.jobStore.class: org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered: true
org.quartz.jobStore.clusterCheckinInterval: 5000

View File

@ -83,22 +83,16 @@ $(function() {
{ "data": 'author', "visible" : true, "width":'10%'},
{ "data": 'alarmEmail', "visible" : false},
{
"data": 'jobStatus',
"data": 'triggerStatus',
"width":'10%',
"visible" : true,
"render": function ( data, type, row ) {
// status
if (data && data != 'NONE') {
if ('NORMAL' == data) {
if (1 == data) {
return '<small class="label label-success" ><i class="fa fa-clock-o"></i>RUNNING</small>';
} else {
return '<small class="label label-warning" >ERROR('+ data +')</small>';
}
} else {
return '<small class="label label-default" ><i class="fa fa-clock-o"></i>STOP</small>';
}
return data;
}
},
@ -109,12 +103,8 @@ $(function() {
return function(){
// status
var start_stop = "";
if (row.jobStatus && row.jobStatus != 'NONE') {
if ('NORMAL' == row.jobStatus) {
if (1 == row.triggerStatus ) {
start_stop = '<button class="btn btn-primary btn-xs job_operate" _type="job_pause" type="button">'+ I18n.jobinfo_opt_stop +'</button> ';
} else {
start_stop = '<button class="btn btn-primary btn-xs job_operate" _type="job_pause" type="button">'+ I18n.jobinfo_opt_stop +'</button> ';
}
} else {
start_stop = '<button class="btn btn-primary btn-xs job_operate" _type="job_resume" type="button">'+ I18n.jobinfo_opt_start +'</button> ';
}

View File

@ -75,7 +75,7 @@
<th name="updateTime" >updateTime</th>
<th name="author" >${I18n.jobinfo_field_author}</th>
<th name="alarmEmail" >${I18n.jobinfo_field_alarmemail}</th>
<th name="jobStatus" >${I18n.system_status}</th>
<th name="triggerStatus" >${I18n.system_status}</th>
<th>${I18n.system_opt}</th>
</tr>
</thead>