We've covered a lot of ground on using PowerShell with MOVEit Automation 2018 to automate tasks. But what if you'd rather use Python? We've got you covered. Let's take a look.
In a recent article on using the REST API for MOVEit Automation 2018, we used PowerShell as the client. Let’s take a different angle this time and learn how to use the Python scripting language to do the same tasks.
To be sure we're comparing apples to apples, we'll work through the same example as the PowerShell article: creating and monitoring tasks. Our task starts with some files in a folder on my MOVEit 2018 server. I'd like to upload all of these files to an Amazon S3 bucket. Let's see how this task can be created and started using Python.
Authentication
A previous article covered how to authenticate to the MOVEit Automation REST API; I'll use that same approach here, this time with Python instead of PowerShell.
First, I'll define all of the variables representing the various URLs, paths and authentication information I'll need later on.
base_uri = 'https://<yourmoveitautomationservernamehere>'
token_path = '/webadmin/api/v1/token'
uri_path = '/webadmin/api/v1/tasks'
api_endpoint = base_uri + token_path
tasks_url = base_uri + uri_path
userName = '<myusername>'
password = '<mypassword>'
Next, I’ll need to get an authentication token. Python has a module called requests we can use to send an HTTP POST request to the API; we’ll have to import that. Since I’m working with a MOVEit server that has a self-signed certificate, I also need to tell Python to ignore the security warning I’d otherwise get, using the disable_warnings() function in the urllib3 library.
Finally, I’ll create a dictionary to hold all of the authentication attributes I need to pass to the server when the request is performed.
import requests
# If the server uses a self-signed cert, this suppresses the SSL warnings
requests.packages.urllib3.disable_warnings()
payload = {'username': userName, 'password': password, 'grant_type': 'password'}
Now that we’re prepared to send the request, I’ll go ahead and do that, assigning the retrieved token to the my_token variable.
r = requests.post(api_endpoint, data=payload, verify=False)
#Auth token to be used by successive requests
my_token = r.json()['access_token']
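Every authenticated call below sends the same Content-Type and Authorization headers, so a small helper of my own (not part of the article's original code) can keep them in one place; the token value shown is, of course, a placeholder.

```python
def auth_headers(token):
    """Build the headers every authenticated MOVEit API call needs."""
    return {
        'Content-Type': 'application/json;charset=UTF-8',
        'Authorization': 'Bearer {}'.format(token),
    }

# Hypothetical token for illustration only
print(auth_headers('abc123'))
```

With this in place, each later request could pass headers=auth_headers(my_token) instead of repeating the dictionary literal.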
Creating the Task
By looking at the Swagger UI in MOVEit, I spot an operation that looks exactly like what I need. The description of “adds a new task” is pretty self-explanatory.
As I click on it, I see that I’m going to have to provide the API with some JSON representing the task I’d like to create. Using the example JSON provided, I’ve managed to come up with JSON describing my task.
{
"Info": {
"Description": "This task moves some files locally to an AWS S3 bucket on demand."
},
"steps": [
{
"Source": {
"HostID": "0",
"Path": "C:\\Temp",
"Type": "FileSystem"
}
},
{
"Destination": {
"HostID": "730111199",
"Type": "S3"
}
}
],
"Name": "Copy Files to S3 Bucket",
"Active": 1
}
Now that I have the JSON created, I’ll copy it into a file called DemoTask.json on my local computer. Once I do that, I need to load this JSON into a Python object that I can eventually send to the API.
Using the load() function in the json module, I can convert the file into a Python dictionary. I first need to remember to import the json module, though.
import json
#convert json file to python dictionary
with open('/Users/adam/Desktop/DemoTask.json') as json_data:
job_json = json.load(json_data)
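If you'd rather not round-trip through a file at all, json.loads() parses the same JSON from a string; this is a minor variation on the article's approach, shown here with an abbreviated version of the task definition.

```python
import json

# A trimmed-down version of DemoTask.json, inlined as a string
# (only a couple of fields are shown, enough to illustrate the parse)
task_str = '''{
    "Name": "Copy Files to S3 Bucket",
    "Active": 1
}'''

# loads() parses a string, while load() reads from a file object
job_json = json.loads(task_str)
print(job_json['Name'])  # Copy Files to S3 Bucket
```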
At this point, I now have everything I need to make an API call to the MOVEit API to create this request.
#Create new Task
new_task = requests.post(tasks_url, headers={'Content-Type': 'application/json;charset=UTF-8', 'Authorization': 'Bearer {}'.format(my_token)}, json=job_json, verify=False)
I now should have a task created called Copy Files to S3 Bucket. To be sure, I’ll query the API using the name of the task.
task_name = 'Copy Files to S3 Bucket'
query_url = tasks_url + '?name=' + task_name
task_query = requests.get(query_url, headers={'Content-Type': 'application/json;charset=UTF-8', 'Authorization': 'Bearer {}'.format(my_token)}, verify=False)
task_json = json.loads(task_query.content)
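Since the task name contains spaces, it's worth knowing that the query string can be built with proper URL encoding rather than by string concatenation. A minimal sketch using the standard library's urllib.parse (the server name is a placeholder, as before):

```python
from urllib.parse import urlencode

tasks_url = 'https://<yourmoveitautomationservernamehere>/webadmin/api/v1/tasks'

# urlencode percent-encodes the value (spaces become '+'),
# producing a query string safe to append to the URL
query_url = tasks_url + '?' + urlencode({'name': 'Copy Files to S3 Bucket'})
print(query_url)
```

The requests library offers the same convenience directly: passing params={'name': task_name} to requests.get() builds and encodes the query string for you.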
To verify this is the right task, I’ll take a look at the response JSON. As you can see, that’s it!
>>> task_json
{u'Info': {u'Notes': u'', u'Description': u'This task moves some files locally to an AWS S3 bucket on demand.'}, u'Group': [], u'Name': u'Copy Files to S3 Bucket', u'UseDefStateCaching': 1, u'CacheNames': u'random', u'TT': u'', u'NextActions': {}, u'NextEID': 13, u'AR': 0, u'steps': [{u'Source': {u'UseDefRetryTimeoutSecs': 1, u'ExFo': u'', u'NewFilesOnly': 0, u'DeleteOrig': 0, u'RetryCount': 0, u'SearchSubdirs': 0, u'Type': u'FileSystem', u'Unzip': 0, u'ExFile': u'', u'Path': u'C:\\Temp', u'RetryTimeoutSecs': 0, u'HostID': u'0', u'DelRename': 1, u'FileMask': u'*.*', u'UseDefRescanSecs': 1, u'RenameTo': u'', u'ID': u'11', u'MxBy': 0, u'RescanSecs': 0, u'RetryIfNoFiles': 0, u'UseDefRetryCount': 1, u'MxFi': 0}}, {u'Destination': {u'UseDefRetryTimeoutSecs': 1, u'Zip': 0, u'OverwriteOrig': 1, u'RetryCount': 0, u'Type': u'S3', u'Username': u'', u'UseRelativeSubdirs': 1, u'UseDefBucket': 1, u'Path': u'', u'Password': u'', u'RetryTimeoutSecs': 0, u'HostID': u'730111199', u'ConnTimeoutSecs': 0, u'Bucket': u'', u'UseDefUser': 1, u'UseDefRescanSecs': 1, u'UseOrigName': 1, u'ID': u'12', u'ForceDir': 1, u'FileName': u'[OrigName]', u'RescanSecs': 0, u'UseDefConnTimeoutSecs': 1, u'UseDefRetryCount': 1}}], u'Schedules': {u'Schedule': []}, u'Active': 1, u'ID': u'218513542'}
Starting the Task
We can now start the task if we’d like by building the appropriate start URL and sending another API call to it. Below, I’m reading the content property of the response to confirm the task has started.
task_id = task_json['items'][0]['ID']
#Create start task url
start_url = "{}/{}/start".format(tasks_url, task_id)
#Initiate job
start_result = requests.post(start_url, headers={'Content-Type': 'application/json;charset=UTF-8', 'Authorization': 'Bearer {}'.format(my_token)}, verify=False)
>>> print(start_result.content)
{"nominalStart":"2018-04-25 20:07:53.54"}
Adam Bertram
Adam Bertram is a 25+ year IT veteran and an experienced online business professional. He’s a successful blogger, consultant, 6x Microsoft MVP, trainer, published author and freelance writer for dozens of publications. For how-to tech tutorials, catch up with Adam at adamtheautomator.com, connect on LinkedIn or follow him on X at @adbertram.