All of the answers are kind of right, but no one completely answers the specific question the OP asked. I'm assuming that the output file is also being written to a second S3 bucket, since they are using Lambda. Note that this code holds everything in an in-memory object, so file sizes need to be considered:
import boto3
import io

# Bucket names
inbucket = 'my-input-bucket'
outbucket = 'my-output-bucket'

s3 = boto3.resource('s3')
outfile = io.StringIO()

# Print out bucket names (optional)
for bucket in s3.buckets.all():
    print(bucket.name)

# Pull data from every file in the inbucket and append it to the in-memory buffer
bucket = s3.Bucket(inbucket)
for obj in bucket.objects.all():
    x = obj.get()['Body'].read().decode()
    print(x)  # optional: echo each file's contents
    outfile.write(x)

# Write the combined contents to the output bucket and close the buffer
outobj = s3.Object(outbucket, 'outputfile.txt')
outobj.put(Body=outfile.getvalue())
outfile.close()
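If the combined input is too large to hold comfortably in memory, a rough alternative is to stream each object in chunks to Lambda's writable /tmp directory and upload the result from disk. This is only a minimal sketch under the same assumed bucket and file names as above, and keep in mind that /tmp has its own size limit on Lambda:

import boto3

inbucket = 'my-input-bucket'
outbucket = 'my-output-bucket'

s3 = boto3.resource('s3')
tmp_path = '/tmp/outputfile.txt'

# Stream every object in the input bucket to a local file, chunk by chunk
with open(tmp_path, 'wb') as f:
    for obj in s3.Bucket(inbucket).objects.all():
        body = obj.get()['Body']
        for chunk in body.iter_chunks(chunk_size=1024 * 1024):
            f.write(chunk)

# Upload the combined file to the output bucket
s3.Bucket(outbucket).upload_file(tmp_path, 'outputfile.txt')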
Check out "Amazon S3 Storage for SQL Server Databases" for details on setting up new Amazon S3 buckets.
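If you'd rather create the buckets from code, here's a minimal sketch; the bucket name and region are placeholders, and note that outside us-east-1 a LocationConstraint is required:

import boto3

client = boto3.client('s3', region_name='us-west-2')
client.create_bucket(
    Bucket='my-output-bucket',  # placeholder name, must be globally unique
    CreateBucketConfiguration={'LocationConstraint': 'us-west-2'},
)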