A Guide to Uploading a PeopleSoft File to Amazon S3 Using a Shell Script

 In Alten Blogs

Use Case

For one of our clients, CSV files were generated on a daily basis in PeopleSoft. These CSV files had to be sent to an Amazon S3 bucket.

This is a one-way integration that sends data from the PeopleSoft database to an external platform. The purpose of this kind of integration is to synchronize student or alumni data into the external system for reporting and to simplify signup for users. The integration ensures that the data in the external platform is kept up to date with the PeopleSoft database.

Integration Summary

Amazon S3 has supported libraries in most major languages, and AWS provides several ways to send a file to S3. A few are listed below.

  • Python Script
  • REST API
  • Shell Script

At its core, an S3 upload is a simple HTTP PUT request to an endpoint of this form:
https://BUCKET_NAME.s3.amazonaws.com/data-integration/file.txt

All Amazon S3 bucket operations use the Authorization request header to carry the authentication information. Below is an example of an Authorization header value (Signature Version 4). For more details, refer to the AWS documentation.

Authorization: AWS4-HMAC-SHA256 
Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, SignedHeaders=host;range;x-amz-date, Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024
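A Signature Version 4 value like the one above is computed from a signing key that is derived by chaining HMAC-SHA256 operations over the secret key, the date, the region, and the service. As a minimal sketch (not a full SigV4 signer), the derivation can be reproduced with `openssl`; the inputs below are the well-known example values from the AWS documentation, not real credentials:

```shell
#!/bin/sh
# Sketch: derive an AWS Signature Version 4 signing key with openssl.
# All values are the example inputs published in the AWS SigV4 documentation.
hmac_hex() {                        # HMAC-SHA256; $1 = key in hex, $2 = data
    printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$1" | sed 's/^.* //'
}

secret="wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY"   # AWS docs example secret
dateStamp="20150830"
region="us-east-1"
service="iam"

# Hex-encode "AWS4" + secret so every HMAC step can take a hex key
kSecret=$(printf 'AWS4%s' "$secret" | od -An -v -tx1 | tr -d ' \n')
kDate=$(hmac_hex "$kSecret" "$dateStamp")
kRegion=$(hmac_hex "$kDate" "$region")
kService=$(hmac_hex "$kRegion" "$service")
kSigning=$(hmac_hex "$kService" "aws4_request")
echo "$kSigning"
```

The final signature is then an HMAC-SHA256 of the string to sign using this derived key.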

Below is an example shell script that uploads a file to an Amazon S3 bucket. It is adapted from an AWS sample with a few changes for PeopleSoft. Note that, unlike the Signature Version 4 header shown above, this script signs requests with the older Signature Version 2 scheme (HMAC-SHA1).

#!/bin/bash
# This script is for exporting file into the external AWS Bucket.
# Simple argument checking
if [ $# -lt 5 ]; then
        echo "Usage: $0 <bucket> <s3_path> <aws_key> <aws_secret> <file_absolute>"
        exit 1
fi
bucket=$1
s3_path=$2
s3Key=$3
s3Secret=$4
file_absolute=$5
file=$(basename "${file_absolute}")
s3_server="s3.amazonaws.com"

# Check for feed file
if [ ! -f "${file_absolute}" ]; then
        echo "File not found: ${file_absolute}"
        exit 2
fi

resource="/${bucket}/${s3_path}/${file}"
contentType="text/plain"
dateValue=$(date -R)
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)

# BEGIN TEST BLOCK
#echo "resource=$resource"
#echo "stringToSign=$stringToSign"
#echo "signature=$signature"
#echo "file=${file}"
#echo "file_absolute=${file_absolute}"
#echo "bucket.s3_server=${bucket}.${s3_server}"
#echo "dateValue=${dateValue}"
#echo "contentType=${contentType}"
#echo "s3Key:signature=${s3Key}:${signature}"
#echo "bucket.s3_server/s3_path/file=${bucket}.${s3_server}/${s3_path}/${file}"
# END TEST BLOCK

echo
echo "Curl command will be executed:"
echo
echo "curl -X PUT -T ${file_absolute} \ "
echo "    -H Host: ${bucket}.${s3_server} \ "
echo "    -H Date: ${dateValue} \ "
echo "    -H Content-Type: ${contentType} \ "
echo "    -H Authorization: AWS ${s3Key}:${signature} \ "
echo "https://${bucket}.${s3_server}/${s3_path}/${file}"

# BEGIN EXECUTE BLOCK
curl -X PUT -T "${file_absolute}" \
-H "Host: ${bucket}.${s3_server}" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.${s3_server}/${s3_path}/${file}
# END EXECUTE BLOCK

We selected the shell-script approach rather than the REST API because it is simpler and quicker: the script itself calculates the signature required in the Authorization request header, so there is no need to compute it separately. When Amazon S3 receives an authenticated request, it computes the signature from the request and compares it with the signature you provided; access is granted only if the two match.
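This server-side comparison works because HMAC is deterministic: the same string to sign and the same secret always produce the same signature. The sketch below isolates the Signature Version 2 signing step from the script above; the secret, resource, and date are placeholders chosen for reproducibility, not real values:

```shell
#!/bin/sh
# Sketch of the Signature Version 2 signing step from the script above.
# The secret and request values are placeholders, not real credentials.
s3Secret="EXAMPLESECRETKEY"
resource="/my-bucket/data-integration/file.txt"
contentType="text/plain"
dateValue="Tue, 27 Mar 2007 19:36:42 +0000"   # fixed date for reproducibility

stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
# printf '%b' expands the \n escapes, as echo -en does in the script
signature=$(printf '%b' "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
# Signing the identical input again yields the identical signature
signature2=$(printf '%b' "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
echo "$signature"
```

An HMAC-SHA1 digest is 20 bytes, so the Base64-encoded signature is always 28 characters long.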

To make this work successfully, you will need to obtain the following from the external team.

  • AWS_ID
  • AWS_KEY
  • BUCKET_NAME

The following PeopleCode method invokes the shell script:

method CallScript
   /+ Returns String +/
   Local string &Location, &SendScript, &Bucket, &Path, &Key, &Secret, &sFilename, &successMSG;
   Local number &exitCode;
   
   &Location = "/u11/oracle/ps_nfs/cs/";
   &SendScript = "ShellScript.sh";
   &Bucket = "BUCKET_NAME";
   &Path = "path_files";
   &Key = "AWS_ID";
   &Secret = "AWS_KEY";
   &sFilename = "File.csv";
   
   CommitWork();

   &exitCode = Exec(&Location | &SendScript | " " | &Bucket | " " | &Path | " " | &Key | " " | &Secret | " " | &Location | &sFilename, %Exec_Synchronous + %FilePath_Absolute);

   If &exitCode = 0 Then
      &successMSG = "Files moved successfully.";
   Else
      &successMSG = "File not moved successfully. Exit Code: " | &exitCode;
   End-If;

   Return &successMSG;
end-method;
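The Exec call above simply concatenates its pieces into a single command line before running the script synchronously. A shell sketch of the string it builds, using the placeholder values from the method:

```shell
#!/bin/sh
# Sketch: the command line assembled by the Exec call in the PeopleCode above.
# All values are the placeholder strings from the method, not real settings.
Location="/u11/oracle/ps_nfs/cs/"
SendScript="ShellScript.sh"
Bucket="BUCKET_NAME"
Path="path_files"
Key="AWS_ID"
Secret="AWS_KEY"
sFilename="File.csv"

# Mirrors: &Location | &SendScript | " " | &Bucket | " " | &Path | " " | ...
cmd="${Location}${SendScript} ${Bucket} ${Path} ${Key} ${Secret} ${Location}${sFilename}"
echo "$cmd"
```

Because the pieces are joined with plain string concatenation, the trailing slash on &Location matters: without it, the script name and file name would be fused onto the directory path.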

For more information, please visit : https://www.altencalsoftlabs.com/devops/

Also, write to us at business@altencalsoftlabs.com
