Short answer : the plugin won't encode anything by default. The Insert from URL step will auto-encode, and will break things it can't recognise properly unless you turn that off ( only in 14 ), but the plugin just sends what you tell it to.
But you're confusing the URL with the data ( URL parameters ) in what you say here :
BaseElements appears to automatically encode the upload URL (my web guy says it's required to do a POST with attachments) but when I try the same with delete, it does not encode the URL parameters, so I have to do GetAsURLEncoded() on each variable before sending off the delete request. Conversely, if I do GetAsURLEncoded() on my upload URL parameters, it fails in a similar fashion.
You need to encode the data, not the parameters : the URL itself shouldn't require encoding, and neither should the variable names. So if you're sending two name/value pairs, name1=data1 and name2=data2, then you need to do :
"name1=" & GetAsURLEncoded ( data1 ) & "&name2=" & GetAsURLEncoded ( data2 )
Nothing else should be encoded; the & and = characters need to be passed as is, as they're part of the structure, not the data.
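To illustrate the same rule outside FileMaker, here's a small Python sketch using urllib.parse.quote in place of GetAsURLEncoded ( build_post_body is just an illustrative name, not part of the plugin ):

```python
from urllib.parse import quote

def build_post_body(pairs):
    # Encode only the values; the '&' and '=' separators stay literal
    # because they are structure, not data.
    return "&".join(name + "=" + quote(value, safe="") for name, value in pairs)

body = build_post_body([("name1", "a&b"), ("name2", "c=d")])
# 'name1=a%26b&name2=c%3Dd'
```

Note that values containing & or = survive because only the value text is percent-encoded.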
It's nice to get an authoritative answer from the author himself. Good to know that it will take whatever we give it; that gives us a baseline to work from as we figure things out on our end.
Maybe I wasn't clear regarding the parameters vs. the data before, but we are indeed only encoding the data portions and NOT the rest of the URL / parameter structure.
The issue remains… if I encode our AWS secret (which includes a + sign) on upload it fails, but if I leave it alone, it works. We'll take another look at our PHP endpoint code.
I'm one of HazMatt's web guys. I've been digging into this today because we were seeing inconsistent behavior between our API calls.
This may be an automatic libcurl thing, but it looks like if there is a file upload, then the request is sent with Content-Type: multipart/form-data. If there is no file, then it uses Content-Type: application/x-www-form-urlencoded.
Since BE doesn't auto-encode parameters, this leads to some inconsistency when POSTing values with special characters. For example, our Amazon keys have plus signs in them.
When posting a file to our API with our AWS keys, we pass the key in unencoded. Since it is sent as multipart/form-data, the server expects to receive it in unencoded form (which it is).
When deleting a file we also pass in our AWS key, but this time since the post Content-Type is x-www-form-urlencoded (as set by libcurl, maybe?) the server expects posted values to be urlencoded and automatically decodes them for us. This results in "OUR_SECRET+KEY" being converted to "OUR_SECRET KEY" if we don't encode it.
This StackOverflow post seems to be related: http://stackoverflow.com/questions/6603928/should-i-url-encode-post-data
The answer says:
"...A value of "application/x-www-form-urlencoded" means that your POST body will need to be URL encoded just like a GET parameter string. A value of "multipart/form-data" means that you'll be using content delimiters and NOT url encoding the content."
So at the moment, when using BE_HTTP_POST, developers will need to know whether they are posting a file, and from that decide whether to encode the parameters. They'll know this and can do so, of course, but it would be nice if it were consistent.
At the very least it would be nice to have this documented.
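To make the difference concrete, here's a standard-library Python sketch of what the two body formats look like on the wire ( the field name and boundary are made up for illustration ):

```python
from urllib.parse import urlencode

fields = {"aws_key": "OUR_SECRET+KEY"}

# No file attached: application/x-www-form-urlencoded body,
# where values must be percent-encoded.
urlencoded_body = urlencode(fields)   # 'aws_key=OUR_SECRET%2BKEY'

# File attached: multipart/form-data, where values travel verbatim
# between boundary delimiters instead of being percent-encoded.
boundary = "XBOUNDARY"
multipart_body = (
    "--" + boundary + "\r\n"
    + 'Content-Disposition: form-data; name="aws_key"\r\n\r\n'
    + fields["aws_key"] + "\r\n"
    + "--" + boundary + "--\r\n"
)
```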
Thanks for the details, I've been looking into it. You're right, the plugin auto-switches the content type based on whether or not there's a file attached. And, as per your investigation, multipart/form-data doesn't need encoding.
But that leaves us with an issue: the plugin looks for & characters to separate the name/value pairs ( part of having all the data in a single text block ), so if you're not encoding, there's no way to send data that contains either an & or an = sign...
I think the answer is to require that you encode data all the time in the plugin, but have the plugin decode the data when it sends via multipart/form-data.
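A rough sketch of that scheme in Python ( split_pairs is a hypothetical stand-in for the plugin's parser, not its actual code ):

```python
from urllib.parse import quote, unquote

def split_pairs(body):
    # Split on the literal & and = separators, then decode each piece.
    # Data containing & or = survives because it arrived percent-encoded.
    pairs = []
    for chunk in body.split("&"):
        name, _, value = chunk.partition("=")
        pairs.append((unquote(name), unquote(value)))
    return pairs

body = "file=" + quote("a&b=c", safe="")   # 'file=a%26b%3Dc'
split_pairs(body)                          # [('file', 'a&b=c')]
```

The decoded pairs could then be handed to libcurl as multipart fields in their raw form.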
I'll keep investigating and see what we can come up with.
Can we talk about this a bit more? I've been given instructions stating that I must submit JSON that includes arrays, so it seems harder to follow your instructions, which appear to require "flat" sequences of key/value pairs.
From an example supplied to me:
Not sure if this helps, as this only uploads content too, but I have a working version of this using the bBox plug-in. The current implementation is below.
The key & somewhat tricky part is that the full operation must be signed. Notice that the bucket name, timestamp, and yes, the content type, must all be included in the signature. Even knowing all this it was tricky to get everything right.
// Need date with time zone, we'll use GMT.
date_value = bBox_Bash (0; "echo -n $(LC_ALL=C date -u +'%a, %d %b %Y %X %z')");
bucket_slash = Position (s3_path; "/"; 1; 1);
bucket = Left (s3_path; If (bucket_slash; bucket_slash -1; Length (s3_path)));
sub_path = If (bucket_slash;
	Middle (s3_path; bucket_slash + 1; 999);
	"");
// For URL and key, we want just the file's name.
file_name = Middle (source_path; Position (source_path; "/"; Length (source_path); -1) + 1; 999);
newline = Char (10);
resource = "/" & bucket & "/" & sub_path & file_name;
string_to_sign = "PUT\n\n" & content_type & "\n" & date_value & "\n" & resource;
// We must sign the action later with our key.
signature = Substitute (bBox_Bash (0; "echo -en $1 | openssl sha1 -hmac $2 -binary | base64"; "-s"; string_to_sign; aws_secret_access_key); ¶; "")
// Assemble the full curl command; bBox_Bash runs it via the shell.
"curl " & "-v" & " -XPUT"
	& " -T" & source_path
	& " -H'Host: " & bucket & ".s3.amazonaws.com'"
	& " -H'Date: " & date_value & "'"
	& " -H'Content-Type: " & content_type & "'"
	& " -H'Authorization: AWS " & aws_access_key_id & ":" & signature & "'"
	& " https://" & bucket & ".s3.amazonaws.com/" & sub_path & file_name
aws_access_key_id: the S3 "account" you use for accessing the bucket
aws_secret_access_key: the S3 "password" for access
s3_path: this may be just the bucket name, or the bucket with folder path (but not file name)
content_type: the MIME type for the data; for text this could be "text/plain"
source_path: the POSIX path of the file to send

Path is assumed to contain at least one /.
Based on info example at http://geek.co.il/2014/05/26/script-day-upload-files-to-amazon-s3-using-bash

2016-08-02 simon_b: created function
2016-08-11 simon_b: now allow specifying folder path inside of bucket
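For comparison, the signing step in the calculation above ( echo | openssl sha1 -hmac | base64 ) matches the AWS Signature Version 2 scheme for S3. Here's the same calculation as a Python sketch; sign_s3_v2 is just an illustrative name:

```python
import base64
import hashlib
import hmac

def sign_s3_v2(secret_key, content_type, date_value, resource):
    # AWS Signature Version 2 for S3: sign the verb, content type,
    # date, and resource path with HMAC-SHA1, then base64-encode.
    string_to_sign = "PUT\n\n" + content_type + "\n" + date_value + "\n" + resource
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_s3_v2("secret", "text/plain",
                 "Tue, 02 Aug 2016 00:00:00 +0000", "/bucket/file.txt")
```

The result goes into the Authorization header as "AWS " & access_key_id & ":" & signature, as in the curl command above.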