Create a new Pipeline
POST /pipelines
Authorization Header
- Bearer token: OAuth 2.0 API access token.
Parameters
- format: json or xml. Overrides the request's Accept header.
- human: true or false. Returns the response in a structured, more human-readable form. This is useful when calling the API through curl or a browser.
- show_null: true or false. If set to "true", the response will also contain keys that have no value set.
- expand_all: true or false. Recursively looks up referenced objects (here: the user id) and embeds the respective JSON as a nested object directly into the response.
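As a sketch of how these query parameters combine on the endpoint URL (a minimal illustration using only the Python standard library; the helper name is hypothetical, and the base URL is taken from the example below):

```python
from urllib.parse import urlencode

# Base endpoint URL as used in the curl example below.
BASE = "https://api.xvid.com/v1/pipelines/"

def pipelines_url(**params):
    """Return the endpoint URL with any of format/human/show_null/expand_all set."""
    # Boolean parameters are sent as the literal strings "true"/"false".
    qs = urlencode({k: str(v).lower() for k, v in params.items()})
    return BASE + ("?" + qs if qs else "")

print(pipelines_url(human=True, show_null=True))
# https://api.xvid.com/v1/pipelines/?human=true&show_null=true
```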
Example
POST /pipelines/
curl -X POST \
  -H "Authorization: Bearer um9VmyJKTPGFqpkL_THjGE5rkXqfURDYqQ8MTBVidG3PtwkfABIdx6s_z9WlFl4_j" \
  -H "Content-Type: application/json" \
  -d '{
    "pipeline": {
      "name": "My own test Pipeline",
      "description": "My test Pipeline with input from S3 and output to my company sftp account",
      "cancel_timeout": 60,
      "default_object_timeout": 3600,
      "input_download_dir": "s3://AWSKEY:AWSSECRET@s3.amazonaws.com/my_bucket/",
      "image_download_dir": "https://username:password@mycompanyserver.com/my_logos/",
      "download_options": {
        "https_certificate_check": false
      },
      "output_upload_dir": "s3://AWSKEY:AWSSECRET@s3.amazonaws.com/my_bucket/",
      "image_upload_dir": "sftp://username:password@mycompanyserver.com/my_folder/",
      "upload_options": {
        "s3_allow_read": "everyone",
        "s3_reduced_redundancy": true
      },
      "store_options": {
        "mode": "temporary"
      },
      "ping_urls": {
        "error": "https://mycompany.com/ping/on_error",
        "warning": "https://mycompany.com/ping/on_warning",
        "started": "https://mycompany.com/ping/start_job",
        "finished": "https://mycompany.com/ping/finish_job"
      }
    }
  }' "https://api.xvid.com/v1/pipelines/?human=true&show_null=true"
Note: default_object_timeout is given in seconds.
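The same request can be sketched in Python using only the standard library (a minimal illustration, not a definitive client; the token is a placeholder and the body is trimmed to a few fields for brevity):

```python
import json
import urllib.request

# Placeholder: substitute your own OAuth 2.0 API access token.
TOKEN = "YOUR_ACCESS_TOKEN"

# A trimmed-down pipeline body; see the field reference below for all options.
body = {
    "pipeline": {
        "name": "My own test Pipeline",
        "description": "Input from S3, output to my company sftp account",
        "cancel_timeout": 60,
    }
}

req = urllib.request.Request(
    "https://api.xvid.com/v1/pipelines/?human=true&show_null=true",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + TOKEN,
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to actually send the request
```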
Valid Fields:
- name: Name of the new pipeline to be created. Choosing a unique name makes pipelines easier to distinguish, but uniqueness is not enforced.
- description: Human-readable description of what the pipeline does and what it is good for.
- type (optional): Type of the pipeline to be created. Currently, the only valid value is "default" (best-effort pipeline). In the future, there may be special pipeline types with a certain SLA attached to them (e.g. a guaranteed throughput).
- cancel_timeout (optional): Upper limit, in seconds, on how long a job may remain in the "QUEUED" state on the pipeline. A job that is not processed within cancel_timeout seconds is auto-cancelled. Default: 0, meaning no timeout (jobs are never auto-cancelled).
- input_download_dir (optional): Base URI of an external location from which input videos are downloaded for conversion. Filenames and paths given as part of jobs submitted on the pipeline are by default treated as relative to input_download_dir. Setting an input_download_dir is optional; if not set, input files specified in job submissions are assumed to already be on the internal store (i.e. uploaded directly before job submission via a call to /files/uploads). The input_download_dir directive supports S3, SFTP and HTTPS remote locations; URIs must be given in the formats shown below:
- S3 account: URL must be in the form: s3://access_key:secret_key@your_s3_region/bucketname/.
- SFTP account: URL given must be in the form: sftp://login:password@host:port/path (if the path part of the URL starts with ~/, the path is treated as relative to the user's home directory; otherwise it is treated as an absolute path)
- HTTPS account: URL given must be in the form: https://login:password@host:port/
- image_download_dir (optional): A special download directive specifying where to obtain images (such as logos) to embed into the output videos during conversion. The format is the same as for input_download_dir. Setting image_download_dir is optional; if not set, images are assumed to reside in the same directory as regular input videos (wherever that is, as determined by input_download_dir). Currently, embedding logos into converted videos is not yet implemented, so this option has no effect.
- download_options:https_certificate_check (optional): Controls whether the remote server's SSL certificate is verified before external input files are downloaded. Default: true, in which case jobs that attempt to download files from insecure HTTPS locations will fail. Set to false if the server hosting your input files uses a self-signed certificate.
- output_upload_dir (optional): Remote location to which converted output videos are uploaded. Can be an SFTP server or your Amazon S3 account. The URL must be formatted in the same way as defined for input_download_dir. If not set, output files are stored only on the internal store.
- image_upload_dir (optional): Special remote location to which the generated thumbnails belonging to the converted output videos are uploaded. Can be an SFTP server or your Amazon S3 account. The URL must be formatted in the same way as defined for input_download_dir. If not set, output thumbnails are stored in the same location as regular output video files (i.e. the output_upload_dir settings apply).
- upload_options:s3_allow_read (optional): Sets an ACL on the uploaded objects to enable cross-account read access for specified users. Can be either "everyone", which makes uploaded files public, or an Amazon canonical user ID given as a string, which grants that user read access on the uploaded object. If not set, the default behavior is "off", so no ACL is set. Currently, only "off" is implemented.
- upload_options:s3_reduced_redundancy (optional): Enables reduced-redundancy storage for converted output files uploaded to S3 through this pipeline. Default: false. Currently, only "false" is supported.
- store_options:mode (optional): Selects the storage mode of the internal store. Default: "temporary". Currently, only "temporary" is implemented; further modes such as "persistent" or "cdn" may be added later. (This option is planned to be replaced by a new "file_timeout" option controlling temporary vs. persistent storage behavior of files.)
- ping_urls:started (optional): Sets the HTTP/HTTPS URL that jobs/backtraces posted on the pipeline will ping once the task has started. The ping request is always a POST containing the job JSON (or backtrace JSON) as the request body. Request params: event_type (with value "started"), timestamp (UNIX epoch time), payload_type (either "job" or "backtrace").
- ping_urls:finished (optional): Sets the HTTP/HTTPS URL that jobs/backtraces posted on the pipeline will ping once the task has finished successfully. The ping request is always a POST containing the job JSON (or backtrace JSON) as the request body. Request params: event_type (with value "finished"), timestamp (UNIX epoch time), payload_type (either "job" or "backtrace").
- ping_urls:error (optional): Sets the HTTP/HTTPS URL that jobs/backtraces posted on the pipeline will ping in case the task errored and failed to complete. The ping request is always a POST containing the job JSON (or backtrace JSON) as the request body. Request params: event_type (with value "error"), timestamp (UNIX epoch time), payload_type (either "job" or "backtrace").
- ping_urls:warning (optional): Sets the HTTP/HTTPS URL that jobs/backtraces posted on the pipeline will ping in case the task completed but a warning occurred. The ping request is always a POST containing the job JSON (or backtrace JSON) as the request body. Request params: event_type (with value "warning"), timestamp (UNIX epoch time), payload_type (either "job" or "backtrace").
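The ping contract above can be sketched as a small receiver-side handler. This is a hypothetical illustration (the function name is not part of the API), and it assumes the request parameters arrive in the URL query string:

```python
import json
from urllib.parse import urlparse, parse_qs

def handle_ping(url: str, body: bytes):
    """Hypothetical ping handler: the API POSTs the job/backtrace JSON as the
    request body and passes event_type, timestamp and payload_type as request
    parameters (assumed here to be in the query string)."""
    params = parse_qs(urlparse(url).query)
    event = params["event_type"][0]           # "started", "finished", "error" or "warning"
    payload_type = params["payload_type"][0]  # "job" or "backtrace"
    payload = json.loads(body)
    # Dispatch on the event type; a real handler might alert, log, or retry.
    action = "alert" if event in ("error", "warning") else "log"
    return action, payload_type, payload
```

For example, a ping to the on_error URL with `event_type=error` and `payload_type=job` would be routed to the "alert" branch with the job JSON as its payload.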