The code below accesses the file contents through `response`, where the function is invoked by an S3 event trigger. Make sure the bucket and key exist and that your bucket is in the same region as this function. The response body will be an instance of a StreamingBody object, which is a file-like object with a few convenience methods. Use the read() method, passing an amt argument if you are processing large files or files of unknown size.

Working on an archive in memory requires a few extra steps. ZipFile expects a file-like object, so wrap the raw bytes in a BytesIO object and open it with the standard library's ZipFile (see the ZipFile documentation):

input_zip = ZipFile(io.BytesIO(contents))

Once you have the data passed to ZipFile, you can call read() on each member of the archive. You will need to figure out what to do from here for your specific use case. If the archive has more than one file inside, you will need logic for handling each one. To open the archive, process it, and then return the contents, you can do something like the following. My example assumes you have one or a few small CSV files to process, and it returns a dictionary with the file name as the key and the value set to the file contents. I have included the next step of reading the CSV files and returning the data with a status code 200 in the response. Keep in mind, your needs may be different. If the function is initiated via a trigger, AWS suggests that you write any output to a separate S3 location to avoid accidentally triggering the function in a loop.
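Putting the pieces together, here is a minimal sketch of what the handler might look like. The helper name `extract_csvs` and the exact event shape are assumptions based on the standard S3 put-event structure; adapt them to your setup.

```python
import io
import json
from zipfile import ZipFile


def extract_csvs(contents):
    """Open a zip archive held in memory and return {member name: text}."""
    # ZipFile needs a file-like object, so wrap the raw bytes in BytesIO.
    input_zip = ZipFile(io.BytesIO(contents))
    return {
        name: input_zip.read(name).decode("utf-8")
        for name in input_zip.namelist()
    }


def lambda_handler(event, context):
    # boto3 is preinstalled in the Lambda runtime; imported here so the
    # helper above stays usable without AWS credentials.
    import boto3

    # Pull the bucket and key out of the S3 trigger event.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    s3 = boto3.client("s3")
    response = s3.get_object(Bucket=bucket, Key=key)
    # response["Body"] is a StreamingBody; read() returns the raw bytes.
    contents = response["Body"].read()

    return {
        "statusCode": 200,
        "body": json.dumps(extract_csvs(contents)),
    }
```

For large archives you would stream or spill to /tmp instead of holding everything in memory, but for a few small CSVs this keeps the handler simple.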