Tag Archives: boto

How to save an S3 object to a file using boto3

Question: How to save an S3 object to a file using boto3

I’m trying to do a “hello world” with the new boto3 client for AWS.

The use-case I have is fairly simple: get an object from S3 and save it to a file.

In boto 2.X I would do it like this:

import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')

In boto 3, I can’t find a clean way to do the same thing, so I’m manually iterating over the “Streaming” object:

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)

or

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:
    for chunk in iter(lambda: key['Body'].read(4096), b''):
        f.write(chunk)

And it works fine. I was wondering: is there any “native” boto3 function that will do the same task?


Answer 0

There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:

s3_client = boto3.client('s3')
open('hello.txt', 'w').write('Hello, world!')

# Upload the file to S3
s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')

# Download the file from S3
s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')
print(open('hello2.txt').read())

These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files.
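
Those transfers can also be tuned; here is a minimal sketch, assuming default credentials are configured (the bucket and file names are placeholders), using boto3.s3.transfer.TransferConfig:

import boto3
from boto3.s3.transfer import TransferConfig

s3_client = boto3.client('s3')

# Files larger than multipart_threshold are uploaded in parts,
# with up to max_concurrency parts in flight at once.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                        max_concurrency=10)
s3_client.upload_file('big-file.bin', 'MyBucket', 'big-file.bin', Config=config)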

Note that s3_client.download_file won’t create a directory; create it first, e.g. with pathlib.Path('/path/to/file.txt').parent.mkdir(parents=True, exist_ok=True).


Answer 1

boto3 now has a nicer interface than the client:

resource = boto3.resource('s3')
my_bucket = resource.Bucket('MyBucket')
my_bucket.download_file(key, local_filename)

This by itself isn’t tremendously better than the client in the accepted answer (although the docs say that it does a better job retrying uploads and downloads on failure), but considering that resources are generally more ergonomic (for example, the s3 bucket and object resources are nicer than the client methods), this does allow you to stay at the resource layer without having to drop down.

Resources generally can be created in the same way as clients, and they take all or most of the same arguments and just forward them to their internal clients.
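
For instance, a sketch of creating a resource with client-style arguments (the region is a placeholder) and dropping down to its internal client when needed:

import boto3

# Resources accept the same arguments a client would (region, credentials, ...).
resource = boto3.resource('s3', region_name='us-east-1')

# The wrapped low-level client is still available when you need an operation
# the resource layer doesn't expose.
client = resource.meta.client
client.head_bucket(Bucket='MyBucket')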


Answer 2

For those of you who would like to simulate the set_contents_from_string like boto2 methods, you can try

import boto3
from cStringIO import StringIO

s3c = boto3.client('s3')
contents = 'My string to save to S3 object'
target_bucket = 'hello-world.by.vor'
target_file = 'data/hello.txt'
fake_handle = StringIO(contents)

# notice if you do fake_handle.read() it reads like a file handle
s3c.put_object(Bucket=target_bucket, Key=target_file, Body=fake_handle.read())

For Python3:

In python3 both StringIO and cStringIO are gone. Use the StringIO import like:

from io import StringIO

To support both versions:

try:
   from StringIO import StringIO
except ImportError:
   from io import StringIO
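
On Python 3 the S3 body must ultimately be bytes, so if you skip the StringIO dance you can simply encode the string; a minimal sketch reusing the bucket and key from above:

import boto3

s3c = boto3.client('s3')
contents = 'My string to save to S3 object'

# put_object accepts bytes (or a binary file-like object such as io.BytesIO)
s3c.put_object(Bucket='hello-world.by.vor',
               Key='data/hello.txt',
               Body=contents.encode('utf-8'))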

Answer 3

# Preface: File is json with contents: {'name': 'Android', 'status': 'ERROR'}

import boto3
import io
import json

s3 = boto3.resource('s3')

obj = s3.Object('my-bucket', 'key-to-file.json')
data = io.BytesIO()
obj.download_fileobj(data)

# data now holds the object's bytes; convert them to a dict:
new_dict = json.loads(data.getvalue().decode("utf-8"))

print(new_dict['status'])
# Should print "ERROR"

Answer 4

When you want to read a file with a configuration different from the default one, feel free to use mpu.aws.s3_download(s3path, destination) directly, or the copy-pasted code below:

import os

import boto3


def s3_download(source, destination,
                exists_strategy='raise',
                profile_name=None):
    """
    Copy a file from an S3 source to a local destination.

    Parameters
    ----------
    source : str
        Path starting with s3://, e.g. 's3://bucket-name/key/foo.bar'
    destination : str
    exists_strategy : {'raise', 'replace', 'abort'}
        What is done when the destination already exists?
    profile_name : str, optional
        AWS profile

    Raises
    ------
    botocore.exceptions.NoCredentialsError
        Botocore is not able to find your credentials. Either specify
        profile_name or add the environment variables AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
        See https://boto3.readthedocs.io/en/latest/guide/configuration.html
    """
    exists_strategies = ['raise', 'replace', 'abort']
    if exists_strategy not in exists_strategies:
        raise ValueError('exists_strategy \'{}\' is not in {}'
                         .format(exists_strategy, exists_strategies))
    session = boto3.Session(profile_name=profile_name)
    s3 = session.resource('s3')
    bucket_name, key = _s3_path_split(source)
    if os.path.isfile(destination):
        if exists_strategy == 'raise':
            raise RuntimeError('File \'{}\' already exists.'
                               .format(destination))
        elif exists_strategy == 'abort':
            return
    s3.Bucket(bucket_name).download_file(key, destination)

from collections import namedtuple

S3Path = namedtuple("S3Path", ["bucket_name", "key"])


def _s3_path_split(s3_path):
    """
    Split an S3 path into bucket and key.

    Parameters
    ----------
    s3_path : str

    Returns
    -------
    splitted : (str, str)
        (bucket, key)

    Examples
    --------
    >>> _s3_path_split('s3://my-bucket/foo/bar.jpg')
    S3Path(bucket_name='my-bucket', key='foo/bar.jpg')
    """
    if not s3_path.startswith("s3://"):
        raise ValueError(
            "s3_path is expected to start with 's3://', " "but was {}"
            .format(s3_path)
        )
    bucket_key = s3_path[len("s3://"):]
    bucket_name, key = bucket_key.split("/", 1)
    return S3Path(bucket_name, key)

Answer 5

Note: I’m assuming you have configured authentication separately. The code below downloads a single object from an S3 bucket.

import boto3

# initiate the S3 resource
s3 = boto3.resource('s3')

# Download the object to a local file
s3.Bucket('mybucket').download_file('hello.txt', '/tmp/hello.txt')

How to upload a file to a directory in an S3 bucket using Boto

Question: How to upload a file to a directory in an S3 bucket using Boto

I want to copy a file to an s3 bucket using python.

Ex: I have bucket name = test. And in the bucket, I have 2 folders named “dump” & “input”. Now I want to copy a file from a local directory to the S3 “dump” folder using python… Can anyone help me?


Answer 0

Try this…

import boto
import boto.s3
import boto.s3.connection
import sys
from boto.s3.key import Key

AWS_ACCESS_KEY_ID = ''
AWS_SECRET_ACCESS_KEY = ''

bucket_name = AWS_ACCESS_KEY_ID.lower() + '-dump'
conn = boto.connect_s3(AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY)


bucket = conn.create_bucket(bucket_name,
    location=boto.s3.connection.Location.DEFAULT)

testfile = "replace this with an actual filename"
print('Uploading %s to Amazon S3 bucket %s' %
      (testfile, bucket_name))

def percent_cb(complete, total):
    sys.stdout.write('.')
    sys.stdout.flush()


k = Key(bucket)
k.key = 'my test file'
k.set_contents_from_filename(testfile,
    cb=percent_cb, num_cb=10)

[UPDATE] I am not a pythonist, so thanks for the heads up about the import statements. Also, I would not recommend placing credentials inside your own source code. If you are running this inside AWS, use IAM credentials with instance profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html), and to keep the same behaviour in your Dev/Test environment, use something like Hologram from AdRoll (https://github.com/AdRoll/hologram).


Answer 1

No need to make it that complicated:

s3_connection = boto.connect_s3()
bucket = s3_connection.get_bucket('your bucket name')
key = boto.s3.key.Key(bucket, 'some_file.zip')
with open('some_file.zip', 'rb') as f:
    key.send_file(f)

Answer 2

import boto3

s3 = boto3.resource('s3')
BUCKET = "test"

s3.Bucket(BUCKET).upload_file("your/local/file", "dump/file")

Answer 3

I used this and it is very simple to implement.

import tinys3

conn = tinys3.Connection('S3_ACCESS_KEY','S3_SECRET_KEY',tls=True)

f = open('some_file.zip','rb')
conn.upload('some_file.zip',f,'my_bucket')

https://www.smore.com/labs/tinys3/


Answer 4

from boto3.s3.transfer import S3Transfer
import boto3
#have all the variables populated which are required below
client = boto3.client('s3', aws_access_key_id=access_key,aws_secret_access_key=secret_key)
transfer = S3Transfer(client)
transfer.upload_file(filepath, bucket_name, folder_name+"/"+filename)

Answer 5

Upload a file to S3 within a session with credentials.

import boto3

session = boto3.Session(
    aws_access_key_id='AWS_ACCESS_KEY_ID',
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY',
)
s3 = session.resource('s3')
# Filename - File to upload
# Bucket - Bucket to upload to (the top level directory under AWS S3)
# Key - S3 object name (can contain subdirectories). If not specified then file_name is used
s3.meta.client.upload_file(Filename='input_file_path', Bucket='bucket_name', Key='s3_output_key')

Answer 6

This will also work:

import os 
import boto
import boto.s3.connection
from boto.s3.key import Key

try:

    conn = boto.s3.connect_to_region('us-east-1',
    aws_access_key_id = 'AWS-Access-Key',
    aws_secret_access_key = 'AWS-Secrete-Key',
    # host = 's3-website-us-east-1.amazonaws.com',
    # is_secure=True,               # uncomment if you are not using ssl
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.get_bucket('YourBucketName')
    key_name = 'FileToUpload'
    path = 'images/holiday' #Directory Under which file should get upload
    full_key_name = os.path.join(path, key_name)
    k = bucket.new_key(full_key_name)
    k.set_contents_from_filename(key_name)

except Exception as e:
    print(str(e))
    print("error")

Answer 7

This is a three-liner. Just follow the instructions in the boto3 documentation.

import boto3
s3 = boto3.resource(service_name = 's3')
s3.meta.client.upload_file(Filename = 'C:/foo/bar/baz.filetype', Bucket = 'yourbucketname', Key = 'baz.filetype')

Some important arguments are:

Parameters:

• Filename (str) — The path to the file to upload.
• Bucket (str) — The name of the bucket to upload to.
• Key (str) — The name of the key that you want to assign to your file in your s3 bucket. This could be the same as the name of the file or a different name of your choice, but the filetype should remain the same.

Note: I assume that you have saved your credentials in a ~\.aws folder as suggested in the best configuration practices in the boto3 documentation.


Answer 8

import boto
from boto.s3.key import Key

AWS_ACCESS_KEY_ID = ''
AWS_SECRET_ACCESS_KEY = ''
END_POINT = ''                          # eg. us-east-1
S3_HOST = ''                            # eg. s3.us-east-1.amazonaws.com
BUCKET_NAME = 'test'
FILENAME = 'upload.txt'
UPLOADED_FILENAME = 'dumps/upload.txt'
# include folders in file path. If it doesn't exist, it will be created

s3 = boto.s3.connect_to_region(END_POINT,
                               aws_access_key_id=AWS_ACCESS_KEY_ID,
                               aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                               host=S3_HOST)

bucket = s3.get_bucket(BUCKET_NAME)
k = Key(bucket)
k.key = UPLOADED_FILENAME
k.set_contents_from_filename(FILENAME)

Answer 9

Using boto3

import logging
import boto3
from botocore.exceptions import ClientError


def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """

    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        response = s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True

For more: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html


Answer 10

To upload a whole folder, use the following code:

import boto
import boto.s3
import boto.s3.connection
import boto.s3.key
import os.path
import sys

# Fill in info on data to upload
# destination bucket name
bucket_name = 'willie20181121'
# source directory
sourceDir = '/home/willie/Desktop/x/'  #Linux Path
# destination directory name (on s3)
destDir = '/test1/'   #S3 Path

#max size in bytes before uploading in parts. between 1 and 5 GB recommended
MAX_SIZE = 20 * 1000 * 1000
#size of parts when uploading in parts
PART_SIZE = 6 * 1000 * 1000

access_key = 'MPBVAQ*******IT****'
secret_key = '11t63yDV***********HgUcgMOSN*****'

conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        host = '******.org.tw',
        is_secure=False,               # uncomment if you are not using ssl
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )
bucket = conn.create_bucket(bucket_name,
        location=boto.s3.connection.Location.DEFAULT)


uploadFileNames = []
for (dirpath, dirnames, filenames) in os.walk(sourceDir):
    uploadFileNames.extend(filenames)
    break

def percent_cb(complete, total):
    sys.stdout.write('.')
    sys.stdout.flush()

for filename in uploadFileNames:
    sourcepath = os.path.join(sourceDir, filename)
    destpath = os.path.join(destDir, filename)
    print ('Uploading %s to Amazon S3 bucket %s' % \
           (sourcepath, bucket_name))

    filesize = os.path.getsize(sourcepath)
    if filesize > MAX_SIZE:
        print ("multipart upload")
        mp = bucket.initiate_multipart_upload(destpath)
        fp = open(sourcepath,'rb')
        fp_num = 0
        while (fp.tell() < filesize):
            fp_num += 1
            print ("uploading part %i" %fp_num)
            mp.upload_part_from_file(fp, fp_num, cb=percent_cb, num_cb=10, size=PART_SIZE)

        mp.complete_upload()

    else:
        print ("singlepart upload")
        k = boto.s3.key.Key(bucket)
        k.key = destpath
        k.set_contents_from_filename(sourcepath,
                cb=percent_cb, num_cb=10)
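
For comparison, here is a sketch of the same folder upload with boto3, reusing the answer's placeholder paths and bucket; boto3's upload_file switches to parallel multipart uploads automatically, so the size branching above is not needed:

import os
import boto3

s3 = boto3.client('s3')
bucket_name = 'willie20181121'
sourceDir = '/home/willie/Desktop/x/'
destDir = 'test1/'

# Walk the source tree and mirror it under the destination prefix.
for dirpath, dirnames, filenames in os.walk(sourceDir):
    for filename in filenames:
        local_path = os.path.join(dirpath, filename)
        key = destDir + os.path.relpath(local_path, sourceDir)
        s3.upload_file(local_path, bucket_name, key)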


Answer 11

# etree, listings, access_key and secret_key come from the poster's
# surrounding code: `listings` is an ElementTree element serialized to XML
# before being written to S3.
xmlstr = etree.tostring(listings, encoding='utf8', method='xml')
conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        # host = '<bucketName>.s3.amazonaws.com',
        host = 'bycket.s3.amazonaws.com',
        #is_secure=False,               # uncomment if you are not using ssl
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )
conn.auth_region_name = 'us-west-1'

bucket = conn.get_bucket('resources', validate=False)
key = bucket.get_key('filename.txt')
key.set_contents_from_string("SAMPLE TEXT")
key.set_canned_acl('public-read')

Answer 12

I have something that seems to me a bit more orderly:

import boto3
from pprint import pprint
from botocore.exceptions import NoCredentialsError


class S3(object):
    BUCKET = "test"
    connection = None

    def __init__(self):
        try:
            # get_s3_credentials is the poster's own helper; the dict keys
            # below are assumptions about what it returns.
            creds = get_s3_credentials("aws")
            self.connection = boto3.resource(
                's3',
                aws_access_key_id=creds['aws_access_key_id'],
                aws_secret_access_key=creds['aws_secret_access_key'])
        except Exception as error:
            print(error)
            self.connection = None

    def upload_file(self, file_to_upload_path, file_name):
        if file_to_upload_path is None or file_name is None: return False
        try:
            pprint(file_to_upload_path)
            file_name = "your-folder-inside-s3/{0}".format(file_name)
            self.connection.Bucket(self.BUCKET).upload_file(file_to_upload_path,
                                                            file_name)
            print("Upload Successful")
            return True

        except FileNotFoundError:
            print("The file was not found")
            return False

        except NoCredentialsError:
            print("Credentials not available")
            return False

There are three important variables here: the BUCKET const, file_to_upload_path and file_name.

BUCKET: is the name of your S3 bucket

file_to_upload_path: must be the path of the file you want to upload

file_name: is the resulting file and path in your bucket (this is where you add folders or whatever)

There are many ways, but you can reuse this code in another script like this

import S3

def some_function():
    S3.S3().upload_file(path_to_file, final_file_name)

How to write a file or data to an S3 object using boto3

Question: How to write a file or data to an S3 object using boto3

In boto 2, you can write to an S3 object using methods such as Key.set_contents_from_string(), Key.set_contents_from_file(), Key.set_contents_from_filename() and Key.set_contents_from_stream().

Is there a boto 3 equivalent? What is the boto3 method for saving data to an object stored on S3?


Answer 0

In boto 3, the 'Key.set_contents_from_' methods were replaced by Object.put() and Client.put_object().

For example:

import boto3

some_binary_data = b'Here we have some data'
more_binary_data = b'Here we have some more data'

# Method 1: Object.put()
s3 = boto3.resource('s3')
object = s3.Object('my_bucket_name', 'my/key/including/filename.txt')
object.put(Body=some_binary_data)

# Method 2: Client.put_object()
client = boto3.client('s3')
client.put_object(Body=more_binary_data, Bucket='my_bucket_name', Key='my/key/including/anotherfilename.txt')

Alternatively, the binary data can come from reading a file, as described in the official docs comparing boto 2 and boto 3:

Storing Data

Storing data from a file, stream, or string is easy:

# Boto 2.x
from boto.s3.key import Key
key = Key('hello.txt')
key.set_contents_from_file('/tmp/hello.txt')

# Boto 3
s3.Object('mybucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))

Answer 1

boto3 also has a method for uploading a file directly:

s3.Bucket('bucketname').upload_file('/local/file/here.txt','folder/sub/path/to/s3key')

http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.upload_file


Answer 2

You no longer have to convert the contents to binary before writing to the file in S3. The following example creates a new text file (called newfile.txt) in an S3 bucket with string contents:

import boto3

s3 = boto3.resource(
    's3',
    region_name='us-east-1',
    aws_access_key_id=KEY_ID,
    aws_secret_access_key=ACCESS_KEY
)
content="String content to write to a new S3 file"
s3.Object('my-bucket-name', 'newfile.txt').put(Body=content)

Answer 3

Here’s a nice trick to read JSON from s3:

import json, boto3
s3 = boto3.resource("s3").Bucket("bucket")
json.load_s3 = lambda f: json.load(s3.Object(key=f).get()["Body"])
json.dump_s3 = lambda obj, f: s3.Object(key=f).put(Body=json.dumps(obj))

Now you can use json.load_s3 and json.dump_s3 with the same API as load and dump

data = {"test":0}
json.dump_s3(data, "key") # saves json to s3://bucket/key
data = json.load_s3("key") # read json from s3://bucket/key

Answer 4

A cleaner and more concise version, which I use to upload files on the fly to a given S3 bucket and sub-folder:

import boto3

BUCKET_NAME = 'sample_bucket_name'
PREFIX = 'sub-folder/'

s3 = boto3.resource('s3')

# Creating an empty file called "_DONE" and putting it in the S3 bucket
s3.Object(BUCKET_NAME, PREFIX + '_DONE').put(Body="")

Note: You should ALWAYS put your AWS credentials (aws_access_key_id and aws_secret_access_key) in a separate file, for example ~/.aws/credentials
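
That shared credentials file uses the standard INI layout; for example (the values are placeholders):

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY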


Answer 5

It is worth mentioning smart-open, which uses boto3 as a back-end.

smart-open is a drop-in replacement for python's open that can open files from s3, as well as ftp, http and many other protocols.

For example:

from smart_open import open
import json
with open("s3://your_bucket/your_key.json", 'r') as f:
    data = json.load(f)

The aws credentials are loaded via boto3 credentials, usually a file in the ~/.aws/ dir or an environment variable.


Answer 6

You may use the code below to write, for example, an image to S3 in 2019. To be able to connect to S3 you will have to install the AWS CLI using the command pip install awscli, then enter a few credentials using the command aws configure:

import urllib3
import uuid
from pathlib import Path
from io import BytesIO

import boto3
from errors import custom_exceptions as cex  # the poster's own exceptions module

BUCKET_NAME = "xxx.yyy.zzz"
POSTERS_BASE_PATH = "assets/wallcontent"
CLOUDFRONT_BASE_URL = "https://xxx.cloudfront.net/"


class S3(object):
    def __init__(self):
        self.client = boto3.client('s3')
        self.bucket_name = BUCKET_NAME
        self.posters_base_path = POSTERS_BASE_PATH

    def __download_image(self, url):
        manager = urllib3.PoolManager()
        try:
            res = manager.request('GET', url)
        except Exception:
            print("Could not download the image from URL: ", url)
            raise cex.ImageDownloadFailed
        return BytesIO(res.data)  # any file-like object that implements read()

    def upload_image(self, url):
        try:
            image_file = self.__download_image(url)
        except cex.ImageDownloadFailed:
            raise cex.ImageUploadFailed

        extension = Path(url).suffix
        id = uuid.uuid1().hex + extension
        final_path = self.posters_base_path + "/" + id
        try:
            self.client.upload_fileobj(image_file,
                                       self.bucket_name,
                                       final_path
                                       )
        except Exception:
            print("Image Upload Error for URL: ", url)
            raise cex.ImageUploadFailed

        return CLOUDFRONT_BASE_URL + id

Open an S3 object as a string with Boto3

Question: Open an S3 object as a string with Boto3

I’m aware that with Boto 2 it’s possible to open an S3 object as a string with: get_contents_as_string()

Is there an equivalent function in boto3?


Answer 0

read will return bytes. At least for Python 3, if you want to return a string, you have to decode using the right encoding:

import boto3

s3 = boto3.resource('s3')

obj = s3.Object(bucket, key)
obj.get()['Body'].read().decode('utf-8')

Answer 1

I had a problem reading/parsing the object from S3 because of .get() using Python 2.7 inside an AWS Lambda.

I added json to the example to show it became parsable :)

import boto3
import json

s3 = boto3.client('s3')

obj = s3.get_object(Bucket=bucket, Key=key)
j = json.loads(obj['Body'].read())

NOTE (for python 2.7): My object is all ascii, so I don’t need .decode('utf-8')

NOTE (for python 3.6+): We moved to python 3.6 and discovered that read() now returns bytes, so if you want to get a string out of it, you must use:

j = json.loads(obj['Body'].read().decode('utf-8'))
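
If the object is too big to read() in one go, botocore's StreamingBody can also be consumed incrementally; here is a sketch (the bucket and key are placeholders, and iter_lines assumes a reasonably recent botocore):

import boto3

s3 = boto3.client('s3')
obj = s3.get_object(Bucket='my-bucket', Key='my-key.txt')

# Stream the body line by line instead of loading it all into memory.
for line in obj['Body'].iter_lines():
    print(line.decode('utf-8'))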


Answer 2

This isn’t in the boto3 documentation. This worked for me:

object.get()["Body"].read()

object being an s3 object: http://boto3.readthedocs.org/en/latest/reference/services/s3.html#object


Answer 3

Python3 + Using the boto3 API approach.

By using the S3.Client.download_fileobj API and a Python file-like object, the S3 object's content can be retrieved into memory.

Since the retrieved content is bytes, in order to convert it to str, it needs to be decoded.

import io
import boto3

client = boto3.client('s3')
bytes_buffer = io.BytesIO()
client.download_fileobj(Bucket=bucket_name, Key=object_key, Fileobj=bytes_buffer)
byte_value = bytes_buffer.getvalue()
str_value = byte_value.decode()  # python3, default decoding is utf-8

Answer 4

If the body contains an io.StringIO, you have to do it like below:

object.get()['Body'].getvalue()

What is the difference between AWS boto and boto3 [closed]

Question: What is the difference between AWS boto and boto3 [closed]

I’m new to AWS using Python and I’m trying to learn the boto API; however, I noticed that there are two major versions/packages for Python: boto and boto3.

What is the difference between the AWS boto and boto3 libraries?


Answer 0

The boto package is the hand-coded Python library that has been around since 2006. It is very popular and is fully supported by AWS, but because it is hand-coded and there are so many services available (with more appearing all the time) it is difficult to maintain.

So, boto3 is a new version of the boto library based on botocore. All of the low-level interfaces to AWS are driven from JSON service descriptions that are generated automatically from the canonical descriptions of the services. So, the interfaces are always correct and always up to date. There is a resource layer on top of the client layer that provides a nicer, more Pythonic interface, as sketched below.

The boto3 library is being actively developed by AWS and is the one I would recommend people use if they are starting new development.
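
To illustrate the two layers the answer describes, a short sketch (the bucket name is a placeholder):

import boto3

# Low-level client: a direct, generated mapping of the service's API operations.
client = boto3.client('s3')
response = client.list_objects_v2(Bucket='my-bucket')

# Resource layer: the nicer, more Pythonic interface built on top of a client.
s3 = boto3.resource('s3')
for obj in s3.Bucket('my-bucket').objects.all():
    print(obj.key)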


How to handle errors with boto3?

Question: How to handle errors with boto3?

I am trying to figure out how to do proper error handling with boto3.

I am trying to create an IAM user:

def create_user(username, iam_conn):
    try:
        user = iam_conn.create_user(UserName=username)
        return user
    except Exception as e:
        return e

When the call to create_user succeeds, I get a neat object that contains the http status code of the API call and the data of the newly created user.

Example:

{'ResponseMetadata':
      {'HTTPStatusCode': 200,
       'RequestId': 'omitted'
      },
 u'User': {u'Arn': 'arn:aws:iam::omitted:user/omitted',
           u'CreateDate': datetime.datetime(2015, 10, 11, 17, 13, 5, 882000, tzinfo=tzutc()),
           u'Path': '/',
           u'UserId': 'omitted',
           u'UserName': 'omitted'
          }
}

This works great. But when this fails (like if the user already exists), I just get an object of type botocore.exceptions.ClientError with only text to tell me what went wrong.

Example: ClientError('An error occurred (EntityAlreadyExists) when calling the CreateUser operation: User with name omitted already exists.',)

This (AFAIK) makes error handling very hard because I can't just switch on the resulting http status code (409 for user already exists according to the AWS API docs for IAM). This makes me think that I must be doing something the wrong way. The optimal way would be for boto3 to never throw exceptions, but just always return an object that reflects how the API call went.

Can anyone enlighten me on this issue or point me in the right direction?


Answer 0

Use the response contained within the exception. Here is an example:

import boto3
from botocore.exceptions import ClientError

try:
    iam = boto3.client('iam')
    user = iam.create_user(UserName='fred')
    print("Created user: %s" % user)
except ClientError as e:
    if e.response['Error']['Code'] == 'EntityAlreadyExists':
        print("User already exists")
    else:
        print("Unexpected error: %s" % e)

The response dict in the exception will contain the following:

• ['Error']['Code'] e.g. 'EntityAlreadyExists' or 'ValidationError'
• ['ResponseMetadata']['HTTPStatusCode'] e.g. 400
• ['ResponseMetadata']['RequestId'] e.g. 'd2b06652-88d7-11e5-99d0-812348583a35'
• ['Error']['Message'] e.g. "An error occurred (EntityAlreadyExists) …"
• ['Error']['Type'] e.g. 'Sender'

For more information see the botocore error handling documentation.

[Updated: 2018-03-07]

The AWS Python SDK has begun to expose service exceptions on clients (though not on resources) that you can explicitly catch, so it is now possible to write that code like this:

import botocore
import boto3

try:
    iam = boto3.client('iam')
    user = iam.create_user(UserName='fred')
    print("Created user: %s" % user)
except iam.exceptions.EntityAlreadyExistsException:
    print("User already exists")
except botocore.exceptions.ParamValidationError as e:
    print("Parameter validation error: %s" % e)
except botocore.exceptions.ClientError as e:
    print("Unexpected error: %s" % e)

Unfortunately, there is currently no documentation for these exceptions, but you can get a list of them as follows:

import botocore
import boto3
dir(botocore.exceptions)

Note that you must import both botocore and boto3. If you only import botocore then you will find that botocore has no attribute named exceptions. This is because the exceptions are dynamically populated into botocore by boto3.
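
Since the service-specific classes hang off the client itself, another way to discover them at runtime is to inspect the client's exceptions attribute; a sketch:

import boto3

iam = boto3.client('iam')

# The dynamically generated, service-specific exception classes live on
# client.exceptions.
print([name for name in dir(iam.exceptions) if not name.startswith('_')])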


Answer 1

I found it very useful, since the exceptions are not documented, to list all exceptions to the screen for this package. Here is the code I used to do it:

import botocore.exceptions
def listexns(mod):
    exns = []
    for name, value in botocore.exceptions.__dict__.items():
        # Exception classes are subclasses of Exception; note that an
        # isinstance() check would never match here, since classes are
        # not instances of Exception.
        if isinstance(value, type) and issubclass(value, Exception):
            exns.append(name)
    for name in exns:
        print('%s.%s is an exception type' % (str(mod), name))
    return

if __name__ == '__main__':
    import sys
    if len(sys.argv) <= 1:
        print('Give me a module name on the $PYTHONPATH!')
    print('Looking for exception types in module: %s' % sys.argv[1])
    listexns(sys.argv[1])

Which results in:

Looking for exception types in module: boto3
boto3.BotoCoreError is an exception type
boto3.DataNotFoundError is an exception type
boto3.UnknownServiceError is an exception type
boto3.ApiVersionNotFoundError is an exception type
boto3.HTTPClientError is an exception type
boto3.ConnectionError is an exception type
boto3.EndpointConnectionError is an exception type
boto3.SSLError is an exception type
boto3.ConnectionClosedError is an exception type
boto3.ReadTimeoutError is an exception type
boto3.ConnectTimeoutError is an exception type
boto3.ProxyConnectionError is an exception type
boto3.NoCredentialsError is an exception type
boto3.PartialCredentialsError is an exception type
boto3.CredentialRetrievalError is an exception type
boto3.UnknownSignatureVersionError is an exception type
boto3.ServiceNotInRegionError is an exception type
boto3.BaseEndpointResolverError is an exception type
boto3.NoRegionError is an exception type
boto3.UnknownEndpointError is an exception type
boto3.ConfigParseError is an exception type
boto3.MissingParametersError is an exception type
boto3.ValidationError is an exception type
boto3.ParamValidationError is an exception type
boto3.UnknownKeyError is an exception type
boto3.RangeError is an exception type
boto3.UnknownParameterError is an exception type
boto3.AliasConflictParameterError is an exception type
boto3.PaginationError is an exception type
boto3.OperationNotPageableError is an exception type
boto3.ChecksumError is an exception type
boto3.UnseekableStreamError is an exception type
boto3.WaiterError is an exception type
boto3.IncompleteReadError is an exception type
boto3.InvalidExpressionError is an exception type
boto3.UnknownCredentialError is an exception type
boto3.WaiterConfigError is an exception type
boto3.UnknownClientMethodError is an exception type
boto3.UnsupportedSignatureVersionError is an exception type
boto3.ClientError is an exception type
boto3.EventStreamError is an exception type
boto3.InvalidDNSNameError is an exception type
boto3.InvalidS3AddressingStyleError is an exception type
boto3.InvalidRetryConfigurationError is an exception type
boto3.InvalidMaxRetryAttemptsError is an exception type
boto3.StubResponseError is an exception type
boto3.StubAssertionError is an exception type
boto3.UnStubbedResponseError is an exception type
boto3.InvalidConfigError is an exception type
boto3.InfiniteLoopConfigError is an exception type
boto3.RefreshWithMFAUnsupportedError is an exception type
boto3.MD5UnavailableError is an exception type
boto3.MetadataRetrievalError is an exception type
boto3.UndefinedModelAttributeError is an exception type
boto3.MissingServiceIdError is an exception type

    Since the exceptions are not documented, I found it very useful to list all of them to the screen for this package. Here is the code I used to do it:

    import botocore.exceptions

    def listexns(mod):
        exns = []
        for name, obj in botocore.exceptions.__dict__.items():
            # the members are exception *classes*, so test with issubclass,
            # not isinstance
            if ((isinstance(obj, type) and issubclass(obj, Exception)) or
                    name.endswith('Error')):
                exns.append(name)
        for name in exns:
            print('%s.%s is an exception type' % (str(mod), name))

    if __name__ == '__main__':
        import sys
        if len(sys.argv) <= 1:
            print('Give me a module name on the $PYTHONPATH!')
            sys.exit(1)
        print('Looking for exception types in module: %s' % sys.argv[1])
        listexns(sys.argv[1])
    

    Which results in:

    Looking for exception types in module: boto3
    boto3.BotoCoreError is an exception type
    boto3.DataNotFoundError is an exception type
    boto3.UnknownServiceError is an exception type
    boto3.ApiVersionNotFoundError is an exception type
    boto3.HTTPClientError is an exception type
    boto3.ConnectionError is an exception type
    boto3.EndpointConnectionError is an exception type
    boto3.SSLError is an exception type
    boto3.ConnectionClosedError is an exception type
    boto3.ReadTimeoutError is an exception type
    boto3.ConnectTimeoutError is an exception type
    boto3.ProxyConnectionError is an exception type
    boto3.NoCredentialsError is an exception type
    boto3.PartialCredentialsError is an exception type
    boto3.CredentialRetrievalError is an exception type
    boto3.UnknownSignatureVersionError is an exception type
    boto3.ServiceNotInRegionError is an exception type
    boto3.BaseEndpointResolverError is an exception type
    boto3.NoRegionError is an exception type
    boto3.UnknownEndpointError is an exception type
    boto3.ConfigParseError is an exception type
    boto3.MissingParametersError is an exception type
    boto3.ValidationError is an exception type
    boto3.ParamValidationError is an exception type
    boto3.UnknownKeyError is an exception type
    boto3.RangeError is an exception type
    boto3.UnknownParameterError is an exception type
    boto3.AliasConflictParameterError is an exception type
    boto3.PaginationError is an exception type
    boto3.OperationNotPageableError is an exception type
    boto3.ChecksumError is an exception type
    boto3.UnseekableStreamError is an exception type
    boto3.WaiterError is an exception type
    boto3.IncompleteReadError is an exception type
    boto3.InvalidExpressionError is an exception type
    boto3.UnknownCredentialError is an exception type
    boto3.WaiterConfigError is an exception type
    boto3.UnknownClientMethodError is an exception type
    boto3.UnsupportedSignatureVersionError is an exception type
    boto3.ClientError is an exception type
    boto3.EventStreamError is an exception type
    boto3.InvalidDNSNameError is an exception type
    boto3.InvalidS3AddressingStyleError is an exception type
    boto3.InvalidRetryConfigurationError is an exception type
    boto3.InvalidMaxRetryAttemptsError is an exception type
    boto3.StubResponseError is an exception type
    boto3.StubAssertionError is an exception type
    boto3.UnStubbedResponseError is an exception type
    boto3.InvalidConfigError is an exception type
    boto3.InfiniteLoopConfigError is an exception type
    boto3.RefreshWithMFAUnsupportedError is an exception type
    boto3.MD5UnavailableError is an exception type
    boto3.MetadataRetrievalError is an exception type
    boto3.UndefinedModelAttributeError is an exception type
    boto3.MissingServiceIdError is an exception type
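
    Any of the names above can then be caught straight from botocore.exceptions. A minimal sketch, assuming no AWS credentials are configured in the environment:

    import boto3
    import botocore.exceptions

    try:
        boto3.client('s3').list_buckets()
    except botocore.exceptions.NoCredentialsError:
        print('No AWS credentials were found')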
    

    Answer 2


    Just an update to the 'no exceptions on resources' problem pointed out by @jarmod (please feel free to update your answer if the below seems applicable).

    I have tested the code below and it runs fine. It uses resources for doing the work but catches the client.exceptions. Although that looks somewhat wrong, it tests well: the exception classes appear and match when inspected in a debugger at exception time.

    It may not be applicable to all resources and clients, but it works for data folders (aka S3 buckets).

    import logging

    import boto3

    logger = logging.getLogger(__name__)

    lab_session = boto3.Session()
    s3 = lab_session.resource('s3')  # the resource does the actual work
    c = lab_session.client('s3')     # this client is only for exception catching

    try:
        b = s3.Bucket(bucket)  # 'bucket' is the bucket name, defined elsewhere
        b.delete()
    except c.exceptions.NoSuchBucket as e:
        # ignore no-such-bucket exceptions
        logger.debug("Failed deleting bucket. Continuing. {}".format(e))
    except Exception as e:
        # log all the others as warnings
        logger.warning("Failed deleting bucket. Continuing. {}".format(e))
    

    Hope this helps…
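
    Incidentally, the extra client is avoidable: a resource exposes the same exception classes through its embedded client, as service_resource.meta.client.exceptions.<ExceptionClass>. A minimal sketch under the same assumptions (the bucket name is hypothetical):

    import boto3

    s3 = boto3.resource('s3')
    try:
        s3.Bucket('some-bucket-that-does-not-exist').delete()  # hypothetical name
    except s3.meta.client.exceptions.NoSuchBucket:
        # same class as boto3.client('s3').exceptions.NoSuchBucket
        pass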


    Answer 3


    As a few others already mentioned, you can catch certain errors using the service client (service_client.exceptions.<ExceptionClass>) or resource (service_resource.meta.client.exceptions.<ExceptionClass>); however, this is not well documented (nor is which exceptions belong to which clients). So here is how to get the complete mapping at the time of writing (January 2020) in region EU (Ireland) (eu-west-1):

    import boto3
    import pprint

    region_name = 'eu-west-1'
    session = boto3.Session(region_name=region_name)
    exceptions = {
        # each service client maps its own error codes to exception classes
        service: list(session.client(service).exceptions._code_to_exception)
        for service in session.get_available_services()
    }
    pprint.pprint(exceptions, width=20000)
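
    Each code in this mapping is exposed as an attribute on the corresponding service client, so any of them can be caught by name. A minimal sketch with a hypothetical bucket name:

    import boto3

    client = boto3.client('s3')
    try:
        client.delete_bucket(Bucket='some-bucket-that-does-not-exist')  # hypothetical
    except client.exceptions.NoSuchBucket as e:
        print(e.response['Error']['Message'])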
    

    Here is a subset of the rather large resulting mapping:

    {'acm': ['InvalidArnException', 'InvalidDomainValidationOptionsException', 'InvalidStateException', 'InvalidTagException', 'LimitExceededException', 'RequestInProgressException', 'ResourceInUseException', 'ResourceNotFoundException', 'TooManyTagsException'],
     'apigateway': ['BadRequestException', 'ConflictException', 'LimitExceededException', 'NotFoundException', 'ServiceUnavailableException', 'TooManyRequestsException', 'UnauthorizedException'],
     'athena': ['InternalServerException', 'InvalidRequestException', 'TooManyRequestsException'],
     'autoscaling': ['AlreadyExists', 'InvalidNextToken', 'LimitExceeded', 'ResourceContention', 'ResourceInUse', 'ScalingActivityInProgress', 'ServiceLinkedRoleFailure'],
     'cloudformation': ['AlreadyExistsException', 'ChangeSetNotFound', 'CreatedButModifiedException', 'InsufficientCapabilitiesException', 'InvalidChangeSetStatus', 'InvalidOperationException', 'LimitExceededException', 'NameAlreadyExistsException', 'OperationIdAlreadyExistsException', 'OperationInProgressException', 'OperationNotFoundException', 'StackInstanceNotFoundException', 'StackSetNotEmptyException', 'StackSetNotFoundException', 'StaleRequestException', 'TokenAlreadyExistsException'],
     'cloudfront': ['AccessDenied', 'BatchTooLarge', 'CNAMEAlreadyExists', 'CannotChangeImmutablePublicKeyFields', 'CloudFrontOriginAccessIdentityAlreadyExists', 'CloudFrontOriginAccessIdentityInUse', 'DistributionAlreadyExists', 'DistributionNotDisabled', 'FieldLevelEncryptionConfigAlreadyExists', 'FieldLevelEncryptionConfigInUse', 'FieldLevelEncryptionProfileAlreadyExists', 'FieldLevelEncryptionProfileInUse', 'FieldLevelEncryptionProfileSizeExceeded', 'IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior', 'IllegalUpdate', 'InconsistentQuantities', 'InvalidArgument', 'InvalidDefaultRootObject', 'InvalidErrorCode', 'InvalidForwardCookies', 'InvalidGeoRestrictionParameter', 'InvalidHeadersForS3Origin', 'InvalidIfMatchVersion', 'InvalidLambdaFunctionAssociation', 'InvalidLocationCode', 'InvalidMinimumProtocolVersion', 'InvalidOrigin', 'InvalidOriginAccessIdentity', 'InvalidOriginKeepaliveTimeout', 'InvalidOriginReadTimeout', 'InvalidProtocolSettings', 'InvalidQueryStringParameters', 'InvalidRelativePath', 'InvalidRequiredProtocol', 'InvalidResponseCode', 'InvalidTTLOrder', 'InvalidTagging', 'InvalidViewerCertificate', 'InvalidWebACLId', 'MissingBody', 'NoSuchCloudFrontOriginAccessIdentity', 'NoSuchDistribution', 'NoSuchFieldLevelEncryptionConfig', 'NoSuchFieldLevelEncryptionProfile', 'NoSuchInvalidation', 'NoSuchOrigin', 'NoSuchPublicKey', 'NoSuchResource', 'NoSuchStreamingDistribution', 'PreconditionFailed', 'PublicKeyAlreadyExists', 'PublicKeyInUse', 'QueryArgProfileEmpty', 'StreamingDistributionAlreadyExists', 'StreamingDistributionNotDisabled', 'TooManyCacheBehaviors', 'TooManyCertificates', 'TooManyCloudFrontOriginAccessIdentities', 'TooManyCookieNamesInWhiteList', 'TooManyDistributionCNAMEs', 'TooManyDistributions', 'TooManyDistributionsAssociatedToFieldLevelEncryptionConfig', 'TooManyDistributionsWithLambdaAssociations', 'TooManyFieldLevelEncryptionConfigs', 'TooManyFieldLevelEncryptionContentTypeProfiles', 'TooManyFieldLevelEncryptionEncryptionEntities', 'TooManyFieldLevelEncryptionFieldPatterns', 'TooManyFieldLevelEncryptionProfiles', 'TooManyFieldLevelEncryptionQueryArgProfiles', 'TooManyHeadersInForwardedValues', 'TooManyInvalidationsInProgress', 'TooManyLambdaFunctionAssociations', 'TooManyOriginCustomHeaders', 'TooManyOriginGroupsPerDistribution', 'TooManyOrigins', 'TooManyPublicKeys', 'TooManyQueryStringParameters', 'TooManyStreamingDistributionCNAMEs', 'TooManyStreamingDistributions', 'TooManyTrustedSigners', 'TrustedSignerDoesNotExist'],
     'cloudtrail': ['CloudTrailARNInvalidException', 'CloudTrailAccessNotEnabledException', 'CloudWatchLogsDeliveryUnavailableException', 'InsufficientDependencyServiceAccessPermissionException', 'InsufficientEncryptionPolicyException', 'InsufficientS3BucketPolicyException', 'InsufficientSnsTopicPolicyException', 'InvalidCloudWatchLogsLogGroupArnException', 'InvalidCloudWatchLogsRoleArnException', 'InvalidEventSelectorsException', 'InvalidHomeRegionException', 'InvalidKmsKeyIdException', 'InvalidLookupAttributesException', 'InvalidMaxResultsException', 'InvalidNextTokenException', 'InvalidParameterCombinationException', 'InvalidS3BucketNameException', 'InvalidS3PrefixException', 'InvalidSnsTopicNameException', 'InvalidTagParameterException', 'InvalidTimeRangeException', 'InvalidTokenException', 'InvalidTrailNameException', 'KmsException', 'KmsKeyDisabledException', 'KmsKeyNotFoundException', 'MaximumNumberOfTrailsExceededException', 'NotOrganizationMasterAccountException', 'OperationNotPermittedException', 'OrganizationNotInAllFeaturesModeException', 'OrganizationsNotInUseException', 'ResourceNotFoundException', 'ResourceTypeNotSupportedException', 'S3BucketDoesNotExistException', 'TagsLimitExceededException', 'TrailAlreadyExistsException', 'TrailNotFoundException', 'TrailNotProvidedException', 'UnsupportedOperationException'],
     'cloudwatch': ['InvalidParameterInput', 'ResourceNotFound', 'InternalServiceError', 'InvalidFormat', 'InvalidNextToken', 'InvalidParameterCombination', 'InvalidParameterValue', 'LimitExceeded', 'MissingParameter'],
     'codebuild': ['AccountLimitExceededException', 'InvalidInputException', 'OAuthProviderException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException'],
     'config': ['InsufficientDeliveryPolicyException', 'InsufficientPermissionsException', 'InvalidConfigurationRecorderNameException', 'InvalidDeliveryChannelNameException', 'InvalidLimitException', 'InvalidNextTokenException', 'InvalidParameterValueException', 'InvalidRecordingGroupException', 'InvalidResultTokenException', 'InvalidRoleException', 'InvalidS3KeyPrefixException', 'InvalidSNSTopicARNException', 'InvalidTimeRangeException', 'LastDeliveryChannelDeleteFailedException', 'LimitExceededException', 'MaxNumberOfConfigRulesExceededException', 'MaxNumberOfConfigurationRecordersExceededException', 'MaxNumberOfDeliveryChannelsExceededException', 'MaxNumberOfRetentionConfigurationsExceededException', 'NoAvailableConfigurationRecorderException', 'NoAvailableDeliveryChannelException', 'NoAvailableOrganizationException', 'NoRunningConfigurationRecorderException', 'NoSuchBucketException', 'NoSuchConfigRuleException', 'NoSuchConfigurationAggregatorException', 'NoSuchConfigurationRecorderException', 'NoSuchDeliveryChannelException', 'NoSuchRetentionConfigurationException', 'OrganizationAccessDeniedException', 'OrganizationAllFeaturesNotEnabledException', 'OversizedConfigurationItemException', 'ResourceInUseException', 'ResourceNotDiscoveredException', 'ValidationException'],
     'dynamodb': ['BackupInUseException', 'BackupNotFoundException', 'ConditionalCheckFailedException', 'ContinuousBackupsUnavailableException', 'GlobalTableAlreadyExistsException', 'GlobalTableNotFoundException', 'IdempotentParameterMismatchException', 'IndexNotFoundException', 'InternalServerError', 'InvalidRestoreTimeException', 'ItemCollectionSizeLimitExceededException', 'LimitExceededException', 'PointInTimeRecoveryUnavailableException', 'ProvisionedThroughputExceededException', 'ReplicaAlreadyExistsException', 'ReplicaNotFoundException', 'RequestLimitExceeded', 'ResourceInUseException', 'ResourceNotFoundException', 'TableAlreadyExistsException', 'TableInUseException', 'TableNotFoundException', 'TransactionCanceledException', 'TransactionConflictException', 'TransactionInProgressException'],
     'ec2': [],
     'ecr': ['EmptyUploadException', 'ImageAlreadyExistsException', 'ImageNotFoundException', 'InvalidLayerException', 'InvalidLayerPartException', 'InvalidParameterException', 'InvalidTagParameterException', 'LayerAlreadyExistsException', 'LayerInaccessibleException', 'LayerPartTooSmallException', 'LayersNotFoundException', 'LifecyclePolicyNotFoundException', 'LifecyclePolicyPreviewInProgressException', 'LifecyclePolicyPreviewNotFoundException', 'LimitExceededException', 'RepositoryAlreadyExistsException', 'RepositoryNotEmptyException', 'RepositoryNotFoundException', 'RepositoryPolicyNotFoundException', 'ServerException', 'TooManyTagsException', 'UploadNotFoundException'],
     'ecs': ['AccessDeniedException', 'AttributeLimitExceededException', 'BlockedException', 'ClientException', 'ClusterContainsContainerInstancesException', 'ClusterContainsServicesException', 'ClusterContainsTasksException', 'ClusterNotFoundException', 'InvalidParameterException', 'MissingVersionException', 'NoUpdateAvailableException', 'PlatformTaskDefinitionIncompatibilityException', 'PlatformUnknownException', 'ResourceNotFoundException', 'ServerException', 'ServiceNotActiveException', 'ServiceNotFoundException', 'TargetNotFoundException', 'UnsupportedFeatureException', 'UpdateInProgressException'],
     'efs': ['BadRequest', 'DependencyTimeout', 'FileSystemAlreadyExists', 'FileSystemInUse', 'FileSystemLimitExceeded', 'FileSystemNotFound', 'IncorrectFileSystemLifeCycleState', 'IncorrectMountTargetState', 'InsufficientThroughputCapacity', 'InternalServerError', 'IpAddressInUse', 'MountTargetConflict', 'MountTargetNotFound', 'NetworkInterfaceLimitExceeded', 'NoFreeAddressesInSubnet', 'SecurityGroupLimitExceeded', 'SecurityGroupNotFound', 'SubnetNotFound', 'ThroughputLimitExceeded', 'TooManyRequests', 'UnsupportedAvailabilityZone'],
     'eks': ['ClientException', 'InvalidParameterException', 'InvalidRequestException', 'ResourceInUseException', 'ResourceLimitExceededException', 'ResourceNotFoundException', 'ServerException', 'ServiceUnavailableException', 'UnsupportedAvailabilityZoneException'],
     'elasticache': ['APICallRateForCustomerExceeded', 'AuthorizationAlreadyExists', 'AuthorizationNotFound', 'CacheClusterAlreadyExists', 'CacheClusterNotFound', 'CacheParameterGroupAlreadyExists', 'CacheParameterGroupNotFound', 'CacheParameterGroupQuotaExceeded', 'CacheSecurityGroupAlreadyExists', 'CacheSecurityGroupNotFound', 'QuotaExceeded.CacheSecurityGroup', 'CacheSubnetGroupAlreadyExists', 'CacheSubnetGroupInUse', 'CacheSubnetGroupNotFoundFault', 'CacheSubnetGroupQuotaExceeded', 'CacheSubnetQuotaExceededFault', 'ClusterQuotaForCustomerExceeded', 'InsufficientCacheClusterCapacity', 'InvalidARN', 'InvalidCacheClusterState', 'InvalidCacheParameterGroupState', 'InvalidCacheSecurityGroupState', 'InvalidParameterCombination', 'InvalidParameterValue', 'InvalidReplicationGroupState', 'InvalidSnapshotState', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'NoOperationFault', 'NodeGroupNotFoundFault', 'NodeGroupsPerReplicationGroupQuotaExceeded', 'NodeQuotaForClusterExceeded', 'NodeQuotaForCustomerExceeded', 'ReplicationGroupAlreadyExists', 'ReplicationGroupNotFoundFault', 'ReservedCacheNodeAlreadyExists', 'ReservedCacheNodeNotFound', 'ReservedCacheNodeQuotaExceeded', 'ReservedCacheNodesOfferingNotFound', 'ServiceLinkedRoleNotFoundFault', 'SnapshotAlreadyExistsFault', 'SnapshotFeatureNotSupportedFault', 'SnapshotNotFoundFault', 'SnapshotQuotaExceededFault', 'SubnetInUse', 'TagNotFound', 'TagQuotaPerResourceExceeded', 'TestFailoverNotAvailableFault'],
     'elasticbeanstalk': ['CodeBuildNotInServiceRegionException', 'ElasticBeanstalkServiceException', 'InsufficientPrivilegesException', 'InvalidRequestException', 'ManagedActionInvalidStateException', 'OperationInProgressFailure', 'PlatformVersionStillReferencedException', 'ResourceNotFoundException', 'ResourceTypeNotSupportedException', 'S3LocationNotInServiceRegionException', 'S3SubscriptionRequiredException', 'SourceBundleDeletionFailure', 'TooManyApplicationVersionsException', 'TooManyApplicationsException', 'TooManyBucketsException', 'TooManyConfigurationTemplatesException', 'TooManyEnvironmentsException', 'TooManyPlatformsException', 'TooManyTagsException'],
     'elb': ['LoadBalancerNotFound', 'CertificateNotFound', 'DependencyThrottle', 'DuplicateLoadBalancerName', 'DuplicateListener', 'DuplicatePolicyName', 'DuplicateTagKeys', 'InvalidConfigurationRequest', 'InvalidInstance', 'InvalidScheme', 'InvalidSecurityGroup', 'InvalidSubnet', 'ListenerNotFound', 'LoadBalancerAttributeNotFound', 'OperationNotPermitted', 'PolicyNotFound', 'PolicyTypeNotFound', 'SubnetNotFound', 'TooManyLoadBalancers', 'TooManyPolicies', 'TooManyTags', 'UnsupportedProtocol'],
     'emr': ['InternalServerError', 'InternalServerException', 'InvalidRequestException'],
     'es': ['BaseException', 'DisabledOperationException', 'InternalException', 'InvalidTypeException', 'LimitExceededException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException', 'ValidationException'],
     'events': ['ConcurrentModificationException', 'InternalException', 'InvalidEventPatternException', 'LimitExceededException', 'ManagedRuleException', 'PolicyLengthExceededException', 'ResourceNotFoundException'],
     'firehose': ['ConcurrentModificationException', 'InvalidArgumentException', 'LimitExceededException', 'ResourceInUseException', 'ResourceNotFoundException', 'ServiceUnavailableException'],
     'glacier': ['InsufficientCapacityException', 'InvalidParameterValueException', 'LimitExceededException', 'MissingParameterValueException', 'PolicyEnforcedException', 'RequestTimeoutException', 'ResourceNotFoundException', 'ServiceUnavailableException'],
     'glue': ['AccessDeniedException', 'AlreadyExistsException', 'ConcurrentModificationException', 'ConcurrentRunsExceededException', 'ConditionCheckFailureException', 'CrawlerNotRunningException', 'CrawlerRunningException', 'CrawlerStoppingException', 'EntityNotFoundException', 'GlueEncryptionException', 'IdempotentParameterMismatchException', 'InternalServiceException', 'InvalidInputException', 'NoScheduleException', 'OperationTimeoutException', 'ResourceNumberLimitExceededException', 'SchedulerNotRunningException', 'SchedulerRunningException', 'SchedulerTransitioningException', 'ValidationException', 'VersionMismatchException'],
     'iam': ['ConcurrentModification', 'ReportExpired', 'ReportNotPresent', 'ReportInProgress', 'DeleteConflict', 'DuplicateCertificate', 'DuplicateSSHPublicKey', 'EntityAlreadyExists', 'EntityTemporarilyUnmodifiable', 'InvalidAuthenticationCode', 'InvalidCertificate', 'InvalidInput', 'InvalidPublicKey', 'InvalidUserType', 'KeyPairMismatch', 'LimitExceeded', 'MalformedCertificate', 'MalformedPolicyDocument', 'NoSuchEntity', 'PasswordPolicyViolation', 'PolicyEvaluation', 'PolicyNotAttachable', 'ServiceFailure', 'NotSupportedService', 'UnmodifiableEntity', 'UnrecognizedPublicKeyEncoding'],
     'kinesis': ['ExpiredIteratorException', 'ExpiredNextTokenException', 'InternalFailureException', 'InvalidArgumentException', 'KMSAccessDeniedException', 'KMSDisabledException', 'KMSInvalidStateException', 'KMSNotFoundException', 'KMSOptInRequired', 'KMSThrottlingException', 'LimitExceededException', 'ProvisionedThroughputExceededException', 'ResourceInUseException', 'ResourceNotFoundException'],
     'kms': ['AlreadyExistsException', 'CloudHsmClusterInUseException', 'CloudHsmClusterInvalidConfigurationException', 'CloudHsmClusterNotActiveException', 'CloudHsmClusterNotFoundException', 'CloudHsmClusterNotRelatedException', 'CustomKeyStoreHasCMKsException', 'CustomKeyStoreInvalidStateException', 'CustomKeyStoreNameInUseException', 'CustomKeyStoreNotFoundException', 'DependencyTimeoutException', 'DisabledException', 'ExpiredImportTokenException', 'IncorrectKeyMaterialException', 'IncorrectTrustAnchorException', 'InvalidAliasNameException', 'InvalidArnException', 'InvalidCiphertextException', 'InvalidGrantIdException', 'InvalidGrantTokenException', 'InvalidImportTokenException', 'InvalidKeyUsageException', 'InvalidMarkerException', 'KMSInternalException', 'KMSInvalidStateException', 'KeyUnavailableException', 'LimitExceededException', 'MalformedPolicyDocumentException', 'NotFoundException', 'TagException', 'UnsupportedOperationException'],
     'lambda': ['CodeStorageExceededException', 'EC2AccessDeniedException', 'EC2ThrottledException', 'EC2UnexpectedException', 'ENILimitReachedException', 'InvalidParameterValueException', 'InvalidRequestContentException', 'InvalidRuntimeException', 'InvalidSecurityGroupIDException', 'InvalidSubnetIDException', 'InvalidZipFileException', 'KMSAccessDeniedException', 'KMSDisabledException', 'KMSInvalidStateException', 'KMSNotFoundException', 'PolicyLengthExceededException', 'PreconditionFailedException', 'RequestTooLargeException', 'ResourceConflictException', 'ResourceInUseException', 'ResourceNotFoundException', 'ServiceException', 'SubnetIPAddressLimitReachedException', 'TooManyRequestsException', 'UnsupportedMediaTypeException'],
     'logs': ['DataAlreadyAcceptedException', 'InvalidOperationException', 'InvalidParameterException', 'InvalidSequenceTokenException', 'LimitExceededException', 'MalformedQueryException', 'OperationAbortedException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException', 'ServiceUnavailableException', 'UnrecognizedClientException'],
     'neptune': ['AuthorizationNotFound', 'CertificateNotFound', 'DBClusterAlreadyExistsFault', 'DBClusterNotFoundFault', 'DBClusterParameterGroupNotFound', 'DBClusterQuotaExceededFault', 'DBClusterRoleAlreadyExists', 'DBClusterRoleNotFound', 'DBClusterRoleQuotaExceeded', 'DBClusterSnapshotAlreadyExistsFault', 'DBClusterSnapshotNotFoundFault', 'DBInstanceAlreadyExists', 'DBInstanceNotFound', 'DBParameterGroupAlreadyExists', 'DBParameterGroupNotFound', 'DBParameterGroupQuotaExceeded', 'DBSecurityGroupNotFound', 'DBSnapshotAlreadyExists', 'DBSnapshotNotFound', 'DBSubnetGroupAlreadyExists', 'DBSubnetGroupDoesNotCoverEnoughAZs', 'DBSubnetGroupNotFoundFault', 'DBSubnetGroupQuotaExceeded', 'DBSubnetQuotaExceededFault', 'DBUpgradeDependencyFailure', 'DomainNotFoundFault', 'EventSubscriptionQuotaExceeded', 'InstanceQuotaExceeded', 'InsufficientDBClusterCapacityFault', 'InsufficientDBInstanceCapacity', 'InsufficientStorageClusterCapacity', 'InvalidDBClusterSnapshotStateFault', 'InvalidDBClusterStateFault', 'InvalidDBInstanceState', 'InvalidDBParameterGroupState', 'InvalidDBSecurityGroupState', 'InvalidDBSnapshotState', 'InvalidDBSubnetGroupStateFault', 'InvalidDBSubnetStateFault', 'InvalidEventSubscriptionState', 'InvalidRestoreFault', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'KMSKeyNotAccessibleFault', 'OptionGroupNotFoundFault', 'ProvisionedIopsNotAvailableInAZFault', 'ResourceNotFoundFault', 'SNSInvalidTopic', 'SNSNoAuthorization', 'SNSTopicArnNotFound', 'SharedSnapshotQuotaExceeded', 'SnapshotQuotaExceeded', 'SourceNotFound', 'StorageQuotaExceeded', 'StorageTypeNotSupported', 'SubnetAlreadyInUse', 'SubscriptionAlreadyExist', 'SubscriptionCategoryNotFound', 'SubscriptionNotFound'],
     'rds': ['AuthorizationAlreadyExists', 'AuthorizationNotFound', 'AuthorizationQuotaExceeded', 'BackupPolicyNotFoundFault', 'CertificateNotFound', 'DBClusterAlreadyExistsFault', 'DBClusterBacktrackNotFoundFault', 'DBClusterEndpointAlreadyExistsFault', 'DBClusterEndpointNotFoundFault', 'DBClusterEndpointQuotaExceededFault', 'DBClusterNotFoundFault', 'DBClusterParameterGroupNotFound', 'DBClusterQuotaExceededFault', 'DBClusterRoleAlreadyExists', 'DBClusterRoleNotFound', 'DBClusterRoleQuotaExceeded', 'DBClusterSnapshotAlreadyExistsFault', 'DBClusterSnapshotNotFoundFault', 'DBInstanceAlreadyExists', 'DBInstanceAutomatedBackupNotFound', 'DBInstanceAutomatedBackupQuotaExceeded', 'DBInstanceNotFound', 'DBInstanceRoleAlreadyExists', 'DBInstanceRoleNotFound', 'DBInstanceRoleQuotaExceeded', 'DBLogFileNotFoundFault', 'DBParameterGroupAlreadyExists', 'DBParameterGroupNotFound', 'DBParameterGroupQuotaExceeded', 'DBSecurityGroupAlreadyExists', 'DBSecurityGroupNotFound', 'DBSecurityGroupNotSupported', 'QuotaExceeded.DBSecurityGroup', 'DBSnapshotAlreadyExists', 'DBSnapshotNotFound', 'DBSubnetGroupAlreadyExists', 'DBSubnetGroupDoesNotCoverEnoughAZs', 'DBSubnetGroupNotAllowedFault', 'DBSubnetGroupNotFoundFault', 'DBSubnetGroupQuotaExceeded', 'DBSubnetQuotaExceededFault', 'DBUpgradeDependencyFailure', 'DomainNotFoundFault', 'EventSubscriptionQuotaExceeded', 'GlobalClusterAlreadyExistsFault', 'GlobalClusterNotFoundFault', 'GlobalClusterQuotaExceededFault', 'InstanceQuotaExceeded', 'InsufficientDBClusterCapacityFault', 'InsufficientDBInstanceCapacity', 'InsufficientStorageClusterCapacity', 'InvalidDBClusterCapacityFault', 'InvalidDBClusterEndpointStateFault', 'InvalidDBClusterSnapshotStateFault', 'InvalidDBClusterStateFault', 'InvalidDBInstanceAutomatedBackupState', 'InvalidDBInstanceState', 'InvalidDBParameterGroupState', 'InvalidDBSecurityGroupState', 'InvalidDBSnapshotState', 'InvalidDBSubnetGroupFault', 'InvalidDBSubnetGroupStateFault', 'InvalidDBSubnetStateFault', 'InvalidEventSubscriptionState', 'InvalidGlobalClusterStateFault', 'InvalidOptionGroupStateFault', 'InvalidRestoreFault', 'InvalidS3BucketFault', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'KMSKeyNotAccessibleFault', 'OptionGroupAlreadyExistsFault', 'OptionGroupNotFoundFault', 'OptionGroupQuotaExceededFault', 'PointInTimeRestoreNotEnabled', 'ProvisionedIopsNotAvailableInAZFault', 'ReservedDBInstanceAlreadyExists', 'ReservedDBInstanceNotFound', 'ReservedDBInstanceQuotaExceeded', 'ReservedDBInstancesOfferingNotFound', 'ResourceNotFoundFault', 'SNSInvalidTopic', 'SNSNoAuthorization', 'SNSTopicArnNotFound', 'SharedSnapshotQuotaExceeded', 'SnapshotQuotaExceeded', 'SourceNotFound', 'StorageQuotaExceeded', 'StorageTypeNotSupported', 'SubnetAlreadyInUse', 'SubscriptionAlreadyExist', 'SubscriptionCategoryNotFound', 'SubscriptionNotFound'],
     'route53': ['ConcurrentModification', 'ConflictingDomainExists', 'ConflictingTypes', 'DelegationSetAlreadyCreated', 'DelegationSetAlreadyReusable', 'DelegationSetInUse', 'DelegationSetNotAvailable', 'DelegationSetNotReusable', 'HealthCheckAlreadyExists', 'HealthCheckInUse', 'HealthCheckVersionMismatch', 'HostedZoneAlreadyExists', 'HostedZoneNotEmpty', 'HostedZoneNotFound', 'HostedZoneNotPrivate', 'IncompatibleVersion', 'InsufficientCloudWatchLogsResourcePolicy', 'InvalidArgument', 'InvalidChangeBatch', 'InvalidDomainName', 'InvalidInput', 'InvalidPaginationToken', 'InvalidTrafficPolicyDocument', 'InvalidVPCId', 'LastVPCAssociation', 'LimitsExceeded', 'NoSuchChange', 'NoSuchCloudWatchLogsLogGroup', 'NoSuchDelegationSet', 'NoSuchGeoLocation', 'NoSuchHealthCheck', 'NoSuchHostedZone', 'NoSuchQueryLoggingConfig', 'NoSuchTrafficPolicy', 'NoSuchTrafficPolicyInstance', 'NotAuthorizedException', 'PriorRequestNotComplete', 'PublicZoneVPCAssociation', 'QueryLoggingConfigAlreadyExists', 'ThrottlingException', 'TooManyHealthChecks', 'TooManyHostedZones', 'TooManyTrafficPolicies', 'TooManyTrafficPolicyInstances', 'TooManyTrafficPolicyVersionsForCurrentPolicy', 'TooManyVPCAssociationAuthorizations', 'TrafficPolicyAlreadyExists', 'TrafficPolicyInUse', 'TrafficPolicyInstanceAlreadyExists', 'VPCAssociationAuthorizationNotFound', 'VPCAssociationNotFound'],
     's3': ['BucketAlreadyExists', 'BucketAlreadyOwnedByYou', 'NoSuchBucket', 'NoSuchKey', 'NoSuchUpload', 'ObjectAlreadyInActiveTierError', 'ObjectNotInActiveTierError'],
     'sagemaker': ['ResourceInUse', 'ResourceLimitExceeded', 'ResourceNotFound'],
     'secretsmanager': ['DecryptionFailure', 'EncryptionFailure', 'InternalServiceError', 'InvalidNextTokenException', 'InvalidParameterException', 'InvalidRequestException', 'LimitExceededException', 'MalformedPolicyDocumentException', 'PreconditionNotMetException', 'ResourceExistsException', 'ResourceNotFoundException'],
     'ses': ['AccountSendingPausedException', 'AlreadyExists', 'CannotDelete', 'ConfigurationSetAlreadyExists', 'ConfigurationSetDoesNotExist', 'ConfigurationSetSendingPausedException', 'CustomVerificationEmailInvalidContent', 'CustomVerificationEmailTemplateAlreadyExists', 'CustomVerificationEmailTemplateDoesNotExist', 'EventDestinationAlreadyExists', 'EventDestinationDoesNotExist', 'FromEmailAddressNotVerified', 'InvalidCloudWatchDestination', 'InvalidConfigurationSet', 'InvalidFirehoseDestination', 'InvalidLambdaFunction', 'InvalidPolicy', 'InvalidRenderingParameter', 'InvalidS3Configuration', 'InvalidSNSDestination', 'InvalidSnsTopic', 'InvalidTemplate', 'InvalidTrackingOptions', 'LimitExceeded', 'MailFromDomainNotVerifiedException', 'MessageRejected', 'MissingRenderingAttribute', 'ProductionAccessNotGranted', 'RuleDoesNotExist', 'RuleSetDoesNotExist', 'TemplateDoesNotExist', 'TrackingOptionsAlreadyExistsException', 'TrackingOptionsDoesNotExistException'],
     'sns': ['AuthorizationError', 'EndpointDisabled', 'FilterPolicyLimitExceeded', 'InternalError', 'InvalidParameter', 'ParameterValueInvalid', 'InvalidSecurity', 'KMSAccessDenied', 'KMSDisabled', 'KMSInvalidState', 'KMSNotFound', 'KMSOptInRequired', 'KMSThrottling', 'NotFound', 'PlatformApplicationDisabled', 'SubscriptionLimitExceeded', 'Throttled', 'TopicLimitExceeded'],
     'sqs': ['AWS.SimpleQueueService.BatchEntryIdsNotDistinct', 'AWS.SimpleQueueService.BatchRequestTooLong', 'AWS.SimpleQueueService.EmptyBatchRequest', 'InvalidAttributeName', 'AWS.SimpleQueueService.InvalidBatchEntryId', 'InvalidIdFormat', 'InvalidMessageContents', 'AWS.SimpleQueueService.MessageNotInflight', 'OverLimit', 'AWS.SimpleQueueService.PurgeQueueInProgress', 'AWS.SimpleQueueService.QueueDeletedRecently', 'AWS.SimpleQueueService.NonExistentQueue', 'QueueAlreadyExists', 'ReceiptHandleIsInvalid', 'AWS.SimpleQueueService.TooManyEntriesInBatchRequest', 'AWS.SimpleQueueService.UnsupportedOperation'],
     'ssm': ['AlreadyExistsException', 'AssociatedInstances', 'AssociationAlreadyExists', 'AssociationDoesNotExist', 'AssociationExecutionDoesNotExist', 'AssociationLimitExceeded', 'AssociationVersionLimitExceeded', 'AutomationDefinitionNotFoundException', 'AutomationDefinitionVersionNotFoundException', 'AutomationExecutionLimitExceededException', 'AutomationExecutionNotFoundException', 'AutomationStepNotFoundException', 'ComplianceTypeCountLimitExceededException', 'CustomSchemaCountLimitExceededException', 'DocumentAlreadyExists', 'DocumentLimitExceeded', 'DocumentPermissionLimit', 'DocumentVersionLimitExceeded', 'DoesNotExistException', 'DuplicateDocumentContent', 'DuplicateDocumentVersionName', 'DuplicateInstanceId', 'FeatureNotAvailableException', 'HierarchyLevelLimitExceededException', 'HierarchyTypeMismatchException', 'IdempotentParameterMismatch', 'InternalServerError', 'InvalidActivation', 'InvalidActivationId', 'InvalidAggregatorException', 'InvalidAllowedPatternException', 'InvalidAssociation', 'InvalidAssociationVersion', 'InvalidAutomationExecutionParametersException', 'InvalidAutomationSignalException', 'InvalidAutomationStatusUpdateException', 'InvalidCommandId', 'InvalidDeleteInventoryParametersException', 'InvalidDeletionIdException', 'InvalidDocument', 'InvalidDocumentContent', 'InvalidDocumentOperation', 'InvalidDocumentSchemaVersion', 'InvalidDocumentVersion', 'InvalidFilter', 'InvalidFilterKey', 'InvalidFilterOption', 'InvalidFilterValue', 'InvalidInstanceId', 'InvalidInstanceInformationFilterValue', 'InvalidInventoryGroupException', 'InvalidInventoryItemContextException', 'InvalidInventoryRequestException', 'InvalidItemContentException', 'InvalidKeyId', 'InvalidNextToken', 'InvalidNotificationConfig', 'InvalidOptionException', 'InvalidOutputFolder', 'InvalidOutputLocation', 'InvalidParameters', 'InvalidPermissionType', 'InvalidPluginName', 'InvalidResourceId', 'InvalidResourceType', 'InvalidResultAttributeException', 'InvalidRole', 'InvalidSchedule', 'InvalidTarget', 'InvalidTypeNameException', 'InvalidUpdate', 'InvocationDoesNotExist', 'ItemContentMismatchException', 'ItemSizeLimitExceededException', 'MaxDocumentSizeExceeded', 'ParameterAlreadyExists', 'ParameterLimitExceeded', 'ParameterMaxVersionLimitExceeded', 'ParameterNotFound', 'ParameterPatternMismatchException', 'ParameterVersionLabelLimitExceeded', 'ParameterVersionNotFound', 'ResourceDataSyncAlreadyExistsException', 'ResourceDataSyncCountExceededException', 'ResourceDataSyncInvalidConfigurationException', 'ResourceDataSyncNotFoundException', 'ResourceInUseException', 'ResourceLimitExceededException', 'StatusUnchanged', 'SubTypeCountLimitExceededException', 'TargetInUseException', 'TargetNotConnected', 'TooManyTagsError', 'TooManyUpdates', 'TotalSizeLimitExceededException', 'UnsupportedInventoryItemContextException', 'UnsupportedInventorySchemaVersionException', 'UnsupportedOperatingSystem', 'UnsupportedParameterType', 'UnsupportedPlatformType'],
     'stepfunctions': ['ActivityDoesNotExist', 'ActivityLimitExceeded', 'ActivityWorkerLimitExceeded', 'ExecutionAlreadyExists', 'ExecutionDoesNotExist', 'ExecutionLimitExceeded', 'InvalidArn', 'InvalidDefinition', 'InvalidExecutionInput', 'InvalidName', 'InvalidOutput', 'InvalidToken', 'MissingRequiredParameter', 'ResourceNotFound', 'StateMachineAlreadyExists', 'StateMachineDeleting', 'StateMachineDoesNotExist', 'StateMachineLimitExceeded', 'TaskDoesNotExist', 'TaskTimedOut', 'TooManyTags'],
     'sts': ['ExpiredTokenException', 'IDPCommunicationError', 'IDPRejectedClaim', 'InvalidAuthorizationMessageException', 'InvalidIdentityToken', 'MalformedPolicyDocument', 'PackedPolicyTooLarge', 'RegionDisabledException'],
     'xray': ['InvalidRequestException', 'RuleLimitExceededException', 'ThrottledException']}
    

    回答 4

    或者对类名称进行比较，例如

    except ClientError as e:
        if 'EntityAlreadyExistsException' == e.__class__.__name__:
            # handle specific error

    因为它们是动态创建的,所以您永远无法导入该类并使用真正的Python捕获它。

    Or a comparison on the class name e.g.

    except ClientError as e:
        if 'EntityAlreadyExistsException' == e.__class__.__name__:
            # handle specific error
    

    Because they are dynamically created you can never import the class and catch it using real Python.
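
    For completeness, a minimal hedged sketch of the full pattern (the IAM client and user name below are placeholder assumptions, not part of the original answer):

    import boto3
    from botocore.exceptions import ClientError

    iam = boto3.client('iam')

    try:
        iam.create_user(UserName='some-user')
    except ClientError as e:
        if e.__class__.__name__ == 'EntityAlreadyExistsException':
            print('user already exists')  # handle the specific error
        else:
            raise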


    回答 5

    如果要使用Python3调用sign_up API(AWS Cognito),则可以使用以下代码。

    # assumed imports (not shown in the original answer); "settings" is assumed
    # to be Django-style settings holding the Cognito app client credentials
    import base64
    import hashlib
    import hmac

    import boto3
    from botocore.exceptions import ClientError
    from django.conf import settings

    def registerUser(userObj):
        ''' Registers the user to AWS Cognito.
        '''
    
        # Mobile number is not a mandatory field. 
        if(len(userObj['user_mob_no']) == 0):
            mobilenumber = ''
        else:
            mobilenumber = userObj['user_country_code']+userObj['user_mob_no']
    
        secretKey = bytes(settings.SOCIAL_AUTH_COGNITO_SECRET, 'latin-1')
        clientId = settings.SOCIAL_AUTH_COGNITO_KEY 
    
        digest = hmac.new(secretKey,
                    msg=(userObj['user_name'] + clientId).encode('utf-8'),
                    digestmod=hashlib.sha256
                    ).digest()
        signature = base64.b64encode(digest).decode()
    
        client = boto3.client('cognito-idp', region_name='eu-west-1' ) 
    
        try:
            response = client.sign_up(
                        ClientId=clientId,
                        Username=userObj['user_name'],
                        Password=userObj['password1'],
                        SecretHash=signature,
                        UserAttributes=[
                            {
                                'Name': 'given_name',
                                'Value': userObj['given_name']
                            },
                            {
                                'Name': 'family_name',
                                'Value': userObj['family_name']
                            },
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                            {
                                'Name': 'phone_number',
                                'Value': mobilenumber
                            }
                        ],
                        ValidationData=[
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                        ]
                        ,
                        AnalyticsMetadata={
                            'AnalyticsEndpointId': 'string'
                        },
                        UserContextData={
                            'EncodedData': 'string'
                        }
                    )
        except ClientError as error:
            return {"errorcode": error.response['Error']['Code'],
                "errormessage" : error.response['Error']['Message'] }
        except Exception as e:
            return {"errorcode": "Something went wrong. Try later or contact the admin" }
        return {"success": "User registered successfully. "}

    error.response['Error']['Code'] 将是 InvalidPasswordException、UsernameExistsException 等。因此，在主函数或调用该函数的地方，您可以编写逻辑，向用户提供有意义的消息。

    响应示例(error.response):

    {
      "Error": {
        "Message": "Password did not conform with policy: Password must have symbol characters",
        "Code": "InvalidPasswordException"
      },
      "ResponseMetadata": {
        "RequestId": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
        "HTTPStatusCode": 400,
        "HTTPHeaders": {
          "date": "Wed, 17 Jul 2019 09:38:32 GMT",
          "content-type": "application/x-amz-json-1.1",
          "content-length": "124",
          "connection": "keep-alive",
          "x-amzn-requestid": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
          "x-amzn-errortype": "InvalidPasswordException:",
          "x-amzn-errormessage": "Password did not conform with policy: Password must have symbol characters"
        },
        "RetryAttempts": 0
      }
    }

    有关更多参考：https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html#CognitoIdentityProvider.Client.sign_up

    If you are calling the sign_up API (AWS Cognito) using Python3, you can use the following code.

    # assumed imports (not shown in the original answer); "settings" is assumed
    # to be Django-style settings holding the Cognito app client credentials
    import base64
    import hashlib
    import hmac

    import boto3
    from botocore.exceptions import ClientError
    from django.conf import settings

    def registerUser(userObj):
        ''' Registers the user to AWS Cognito.
        '''
    
        # Mobile number is not a mandatory field. 
        if(len(userObj['user_mob_no']) == 0):
            mobilenumber = ''
        else:
            mobilenumber = userObj['user_country_code']+userObj['user_mob_no']
    
        secretKey = bytes(settings.SOCIAL_AUTH_COGNITO_SECRET, 'latin-1')
        clientId = settings.SOCIAL_AUTH_COGNITO_KEY 
    
        digest = hmac.new(secretKey,
                    msg=(userObj['user_name'] + clientId).encode('utf-8'),
                    digestmod=hashlib.sha256
                    ).digest()
        signature = base64.b64encode(digest).decode()
    
        client = boto3.client('cognito-idp', region_name='eu-west-1' ) 
    
        try:
            response = client.sign_up(
                        ClientId=clientId,
                        Username=userObj['user_name'],
                        Password=userObj['password1'],
                        SecretHash=signature,
                        UserAttributes=[
                            {
                                'Name': 'given_name',
                                'Value': userObj['given_name']
                            },
                            {
                                'Name': 'family_name',
                                'Value': userObj['family_name']
                            },
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                            {
                                'Name': 'phone_number',
                                'Value': mobilenumber
                            }
                        ],
                        ValidationData=[
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                        ]
                        ,
                        AnalyticsMetadata={
                            'AnalyticsEndpointId': 'string'
                        },
                        UserContextData={
                            'EncodedData': 'string'
                        }
                    )
        except ClientError as error:
            return {"errorcode": error.response['Error']['Code'],
                "errormessage" : error.response['Error']['Message'] }
        except Exception as e:
            return {"errorcode": "Something went wrong. Try later or contact the admin" }
        return {"success": "User registered successfully. "}
    

    error.response['Error']['Code'] will be InvalidPasswordException, UsernameExistsException etc. So in the main function, or wherever you are calling the function, you can write the logic to provide a meaningful message to the user.

    An example for the response (error.response):

    {
      "Error": {
        "Message": "Password did not conform with policy: Password must have symbol characters",
        "Code": "InvalidPasswordException"
      },
      "ResponseMetadata": {
        "RequestId": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
        "HTTPStatusCode": 400,
        "HTTPHeaders": {
          "date": "Wed, 17 Jul 2019 09:38:32 GMT",
          "content-type": "application/x-amz-json-1.1",
          "content-length": "124",
          "connection": "keep-alive",
          "x-amzn-requestid": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
          "x-amzn-errortype": "InvalidPasswordException:",
          "x-amzn-errormessage": "Password did not conform with policy: Password must have symbol characters"
        },
        "RetryAttempts": 0
      }
    }
    

    For further reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html#CognitoIdentityProvider.Client.sign_up
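
    A possible usage sketch (every field value below is a placeholder assumption, matching the keys the function reads):

    user = {
        'user_name': 'jdoe',
        'password1': 'S3cret!pass',
        'given_name': 'John',
        'family_name': 'Doe',
        'user_email': 'jdoe@example.com',
        'user_country_code': '+44',
        'user_mob_no': '7700900123',
    }

    result = registerUser(user)
    if 'errorcode' in result:
        print(result['errorcode'], result.get('errormessage', ''))
    else:
        print(result['success'])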


    回答 6

    万一您不得不处理不太友好的 logs 客户端（CloudWatch Logs 的 put-log-events），以下是我为了正确捕获 Boto3 客户端异常所必须做的：

    # assumed imports (not in the original answer); "Log" is assumed to be a
    # configured logging.Logger instance
    from botocore import exceptions as boto_exceptions

    try:
        ...  # Boto3 client code here

    except boto_exceptions.ClientError as error:
        Log.warning("Caught client error code %s",
                    error.response['Error']['Code'])

        if error.response['Error']['Code'] in ["DataAlreadyAcceptedException",
                                               "InvalidSequenceTokenException"]:
            Log.debug(
                "Fetching sequence_token from boto error response['Error']['Message'] %s",
                error.response["Error"]["Message"])
            # NOTE: apparently there's no sequenceToken attribute in the response so we have
            # to parse the response["Error"]["Message"] string
            sequence_token = error.response["Error"]["Message"].split(":")[-1].strip(" ")
            Log.debug("Setting sequence_token to %s", sequence_token)

    这在第一次尝试(使用空LogStream时)和后续尝试中都有效。

    In case you have to deal with the arguably unfriendly logs client (CloudWatch Logs put-log-events), this is what I had to do to properly catch Boto3 client exceptions:

    # assumed imports (not in the original answer); "Log" is assumed to be a
    # configured logging.Logger instance
    from botocore import exceptions as boto_exceptions

    try:
        ...  # Boto3 client code here

    except boto_exceptions.ClientError as error:
        Log.warning("Caught client error code %s",
                    error.response['Error']['Code'])

        if error.response['Error']['Code'] in ["DataAlreadyAcceptedException",
                                               "InvalidSequenceTokenException"]:
            Log.debug(
                "Fetching sequence_token from boto error response['Error']['Message'] %s",
                error.response["Error"]["Message"])
            # NOTE: apparently there's no sequenceToken attribute in the response so we have
            # to parse the response["Error"]["Message"] string
            sequence_token = error.response["Error"]["Message"].split(":")[-1].strip(" ")
            Log.debug("Setting sequence_token to %s", sequence_token)
    

    This works both at first attempt (with empty LogStream) and subsequent ones.
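
    As a hedged follow-up sketch (not from the original answer), the recovered sequence_token can be fed back into a retry of put_log_events; the log group/stream names and event payload below are placeholder assumptions:

    import time

    import boto3
    from botocore import exceptions as boto_exceptions

    logs_client = boto3.client('logs')  # CloudWatch Logs client

    def put_event_with_retry(message, group='my-log-group', stream='my-log-stream'):
        """Send one log event, retrying once with the token parsed from the error."""
        event = {'timestamp': int(time.time() * 1000), 'message': message}
        try:
            logs_client.put_log_events(
                logGroupName=group, logStreamName=stream, logEvents=[event])
        except boto_exceptions.ClientError as error:
            if error.response['Error']['Code'] not in (
                    'DataAlreadyAcceptedException', 'InvalidSequenceTokenException'):
                raise
            # Recover the expected token from the message, as in the answer above.
            sequence_token = error.response['Error']['Message'].split(':')[-1].strip()
            logs_client.put_log_events(
                logGroupName=group, logStreamName=stream,
                logEvents=[event], sequenceToken=sequence_token)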


    回答 7

    继 @armod 关于异常已被直接添加到 client 对象上的更新之后，我将展示如何查看为您的客户端类定义的所有异常。

    使用 session.create_client() 或 boto3.client() 创建客户端时，异常是动态生成的。其内部会调用 botocore.errorfactory.ClientExceptionsFactory._create_client_exceptions() 方法，并用构造好的异常类填充 client.exceptions 字段。

    所有类名都可以在client.exceptions._code_to_exception字典中找到,因此您可以使用以下代码段列出所有类型:

    client = boto3.client('s3')
    
    for ex_code in client.exceptions._code_to_exception:
        print(ex_code)

    希望能帮助到你。

    Following @armod’s update about exceptions being added right on client objects. I’ll show how you can see all exceptions defined for your client class.

    Exceptions are generated dynamically when you create your client with session.create_client() or boto3.client(). Internally it calls method botocore.errorfactory.ClientExceptionsFactory._create_client_exceptions() and fills client.exceptions field with constructed exception classes.

    All class names are available in client.exceptions._code_to_exception dictionary, so you can list all types with following snippet:

    client = boto3.client('s3')
    
    for ex_code in client.exceptions._code_to_exception:
        print(ex_code)
    

    Hope it helps.
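
    Building on this, a dynamically generated class can also be caught directly off the client object; a minimal sketch (bucket and key names are placeholder assumptions):

    import boto3

    client = boto3.client('s3')

    try:
        client.get_object(Bucket='my-bucket', Key='missing-key')
    except client.exceptions.NoSuchKey:
        # the same class as client.exceptions._code_to_exception['NoSuchKey']
        print('no such key')
    except client.exceptions.NoSuchBucket:
        print('no such bucket')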


    回答 8

    当调用失败时，您需要采取措施来处理该问题，而现在您只是在返回实际的异常。例如，如果用户已经存在并不算问题，并且您想把它当作 get_or_create 函数来用，那么可以通过返回现有的用户对象来处理这种情况。

    import botocore.exceptions

    # wrapped in a function (name and signature are illustrative) so the
    # "return" statements are valid
    def create_user_or_get_existing(iam_conn, username):
        try:
            user = iam_conn.create_user(UserName=username)
            return user
        except botocore.exceptions.ClientError as e:

            # this exception could actually be other things other than "exists",
            # so you want to evaluate it further in your real code.
            if str(e).startswith(
                    'enough of the exception message to identify it as the one you want'):

                print('that user already exists.')
                user = iam_conn.get_user(UserName=username)
                return user

            elif some_other_condition(e):  # pseudocode: your own check on the error
                pass  # something else
            else:
                # unhandled ClientError
                raise
        except SomeOtherExceptionTypeYouCareAbout as e:
            pass  # handle it

        # any unhandled exception will raise at this point.
        # if you want a general handler:
        except Exception as e:
            pass  # handle it.

    就是说，这也可能确实是您应用程序中的一个问题。在这种情况下，您应该把异常处理程序放在调用创建用户函数的代码周围，让调用方决定如何处理它，例如要求用户输入另一个用户名，或采取对您的应用有意义的其他任何做法。

    You need to do something when it fails to handle the issue. Right now you are returning the actual exception. For example, if its not a problem that the user exists already and you want to use it as a get_or_create function maybe you handle the issue by returning the existing user object.

    import botocore.exceptions

    # wrapped in a function (name and signature are illustrative) so the
    # "return" statements are valid
    def create_user_or_get_existing(iam_conn, username):
        try:
            user = iam_conn.create_user(UserName=username)
            return user
        except botocore.exceptions.ClientError as e:

            # this exception could actually be other things other than "exists",
            # so you want to evaluate it further in your real code.
            if str(e).startswith(
                    'enough of the exception message to identify it as the one you want'):

                print('that user already exists.')
                user = iam_conn.get_user(UserName=username)
                return user

            elif some_other_condition(e):  # pseudocode: your own check on the error
                pass  # something else
            else:
                # unhandled ClientError
                raise
        except SomeOtherExceptionTypeYouCareAbout as e:
            pass  # handle it

        # any unhandled exception will raise at this point.
        # if you want a general handler:
        except Exception as e:
            pass  # handle it.
    

    That said, maybe it is a problem for your app, in which case you want to put the exception handler around the code that called your create user function and let the calling function determine how to deal with it, for example, by asking the user to input another username, or whatever makes sense for your application.
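
    As a hedged alternative sketch (not the original author's code): with current botocore, matching on the structured error code avoids fragile message parsing; 'EntityAlreadyExists' is the code IAM returns for a duplicate user:

    import boto3
    from botocore.exceptions import ClientError

    iam = boto3.client('iam')

    def get_or_create_user(username):
        """Return the IAM user, creating it first if it does not exist."""
        try:
            return iam.create_user(UserName=username)['User']
        except ClientError as e:
            if e.response['Error']['Code'] == 'EntityAlreadyExists':
                return iam.get_user(UserName=username)['User']
            raise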


    列出带有boto3的存储桶的内容

    问题:列出带有boto3的存储桶的内容

    如何使用 boto3 查看 S3 中某个存储桶里有什么内容？（即执行 "ls"）？

    执行以下操作:

    import boto3
    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket('some/path/')

    返回:

    s3.Bucket(name='some/path/')

    我如何看其内容?

    How can I see what’s inside a bucket in S3 with boto3? (i.e. do an "ls")?

    Doing the following:

    import boto3
    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket('some/path/')
    

    returns:

    s3.Bucket(name='some/path/')
    

    How do I see its contents?


    回答 0

    查看内容的一种方法是:

    for my_bucket_object in my_bucket.objects.all():
        print(my_bucket_object)

    One way to see the contents would be:

    for my_bucket_object in my_bucket.objects.all():
        print(my_bucket_object)
    
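
    If you only need a subset of the keys, the resource API also supports server-side prefix filtering; a small hedged sketch (bucket and prefix are placeholder assumptions):

    import boto3

    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket('my-bucket')

    # only keys under the given prefix are returned (pagination is handled transparently)
    for obj in my_bucket.objects.filter(Prefix='logs/2020/'):
        print(obj.key)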

    回答 1

    这类似于 "ls"，但它不考虑前缀文件夹约定，而是直接列出存储桶中的对象。过滤掉键名称中作为前缀的部分则留给读者自行处理。

    在Python 2中:

    from boto.s3.connection import S3Connection
    
    conn = S3Connection() # assumes boto.cfg setup
    bucket = conn.get_bucket('bucket_name')
    for obj in bucket.get_all_keys():
        print(obj.key)

    在Python 3中:

    from boto3 import client
    
    conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
    for key in conn.list_objects(Bucket='bucket_name')['Contents']:
        print(key['Key'])

    This is similar to an ‘ls’ but it does not take into account the prefix folder convention and will list the objects in the bucket. It’s left up to the reader to filter out prefixes which are part of the Key name.

    In Python 2:

    from boto.s3.connection import S3Connection
    
    conn = S3Connection() # assumes boto.cfg setup
    bucket = conn.get_bucket('bucket_name')
    for obj in bucket.get_all_keys():
        print(obj.key)
    

    In Python 3:

    from boto3 import client
    
    conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
    for key in conn.list_objects(Bucket='bucket_name')['Contents']:
        print(key['Key'])
    
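
    Note that list_objects returns at most 1000 keys per call; a hedged Python 3 sketch using boto3's built-in paginator to walk all pages (bucket name is a placeholder):

    import boto3

    s3 = boto3.client('s3')
    paginator = s3.get_paginator('list_objects_v2')

    for page in paginator.paginate(Bucket='bucket_name'):
        for obj in page.get('Contents', []):
            print(obj['Key'])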

    回答 2

    我假设您已经分别配置了身份验证。

    import boto3
    s3 = boto3.resource('s3')
    
    my_bucket = s3.Bucket('bucket_name')
    
    for file in my_bucket.objects.all():
        print(file.key)

    I’m assuming you have configured authentication separately.

    import boto3
    s3 = boto3.resource('s3')
    
    my_bucket = s3.Bucket('bucket_name')
    
    for file in my_bucket.objects.all():
        print(file.key)
    

    回答 3

    如果要直接传递 ACCESS 和 SECRET 密钥（出于安全考虑，不应该这样做）：

    from boto3.session import Session
    
    ACCESS_KEY='your_access_key'
    SECRET_KEY='your_secret_key'
    
    session = Session(aws_access_key_id=ACCESS_KEY,
                      aws_secret_access_key=SECRET_KEY)
    s3 = session.resource('s3')
    your_bucket = s3.Bucket('your_bucket')
    
    for s3_file in your_bucket.objects.all():
        print(s3_file.key)

    If you want to pass the ACCESS and SECRET keys (which you should not do, because it is not secure):

    from boto3.session import Session
    
    ACCESS_KEY='your_access_key'
    SECRET_KEY='your_secret_key'
    
    session = Session(aws_access_key_id=ACCESS_KEY,
                      aws_secret_access_key=SECRET_KEY)
    s3 = session.resource('s3')
    your_bucket = s3.Bucket('your_bucket')
    
    for s3_file in your_bucket.objects.all():
        print(s3_file.key)
    

    回答 4

    为了处理大型键列表（即目录列表超过 1000 个条目时），我使用以下代码通过多次列举来累加键值（即文件名）（前几行代码要感谢上面的 Amelio）。代码适用于 Python 3：

    from boto3 import client

    # wrapped in a function (name is illustrative) so the "return" statements are valid
    def list_bucket_keys(bucket_name="my_bucket", prefix="my_key/sub_key/lots_o_files"):
        s3_conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/")

        if 'Contents' not in s3_result:
            #print(s3_result)
            return []

        file_list = []
        for key in s3_result['Contents']:
            file_list.append(key['Key'])
        print(f"List count = {len(file_list)}")

        while s3_result['IsTruncated']:
            continuation_key = s3_result['NextContinuationToken']
            s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/", ContinuationToken=continuation_key)
            for key in s3_result['Contents']:
                file_list.append(key['Key'])
            print(f"List count = {len(file_list)}")
        return file_list

    In order to handle large key listings (i.e. when the directory list is greater than 1000 items), I used the following code to accumulate key values (i.e. filenames) with multiple listings (thanks to Amelio above for the first lines). Code is for python3:

    from boto3 import client

    # wrapped in a function (name is illustrative) so the "return" statements are valid
    def list_bucket_keys(bucket_name="my_bucket", prefix="my_key/sub_key/lots_o_files"):
        s3_conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/")

        if 'Contents' not in s3_result:
            #print(s3_result)
            return []

        file_list = []
        for key in s3_result['Contents']:
            file_list.append(key['Key'])
        print(f"List count = {len(file_list)}")

        while s3_result['IsTruncated']:
            continuation_key = s3_result['NextContinuationToken']
            s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/", ContinuationToken=continuation_key)
            for key in s3_result['Contents']:
                file_list.append(key['Key'])
            print(f"List count = {len(file_list)}")
        return file_list
    

    回答 5

    我的s3 keys实用程序函数本质上是@Hephaestus答案的优化版本:

    import boto3
    
    
    s3_paginator = boto3.client('s3').get_paginator('list_objects_v2')
    
    
    def keys(bucket_name, prefix='/', delimiter='/', start_after=''):
        prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
        start_after = (start_after or prefix) if prefix.endswith(delimiter) else start_after
        for page in s3_paginator.paginate(Bucket=bucket_name, Prefix=prefix, StartAfter=start_after):
            for content in page.get('Contents', ()):
                yield content['Key']

    在我的测试(boto3 1.9.84)中,它比等效(但更简单)的代码快得多:

    import boto3
    
    
    def keys(bucket_name, prefix='/', delimiter='/'):
        prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
        bucket = boto3.resource('s3').Bucket(bucket_name)
        return (_.key for _ in bucket.objects.filter(Prefix=prefix))

    由于 S3 保证返回按 UTF-8 二进制排序的结果，因此在第一个函数中加入了 start_after 优化。

    My s3 keys utility function is essentially an optimized version of @Hephaestus’s answer:

    import boto3
    
    
    s3_paginator = boto3.client('s3').get_paginator('list_objects_v2')
    
    
    def keys(bucket_name, prefix='/', delimiter='/', start_after=''):
        prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
        start_after = (start_after or prefix) if prefix.endswith(delimiter) else start_after
        for page in s3_paginator.paginate(Bucket=bucket_name, Prefix=prefix, StartAfter=start_after):
            for content in page.get('Contents', ()):
                yield content['Key']
    

    In my tests (boto3 1.9.84), it’s significantly faster than the equivalent (but simpler) code:

    import boto3
    
    
    def keys(bucket_name, prefix='/', delimiter='/'):
        prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
        bucket = boto3.resource('s3').Bucket(bucket_name)
        return (_.key for _ in bucket.objects.filter(Prefix=prefix))
    

    As S3 guarantees UTF-8 binary sorted results, a start_after optimization has been added to the first function.
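
    A possible usage sketch of the generator above (bucket and prefix are placeholder assumptions):

    # keys() is a generator, so it can be consumed lazily or materialized:
    for key in keys('my-bucket', prefix='logs/'):
        print(key)

    all_keys = list(keys('my-bucket'))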


    回答 6

    一种更简洁的方法：与其用 for 循环逐个遍历，您也可以直接打印包含 S3 存储桶中所有文件的原始可迭代对象：

    session = Session(aws_access_key_id=aws_access_key_id,aws_secret_access_key=aws_secret_access_key)
    s3 = session.resource('s3')
    bucket = s3.Bucket('bucket_name')
    
    files_in_s3 = bucket.objects.all() 
    #you can print this iterable with print(list(files_in_s3))

    A more parsimonious way: rather than iterating through via a for loop, you could also just print the original iterable containing all the files inside your S3 bucket:

    session = Session(aws_access_key_id=aws_access_key_id,aws_secret_access_key=aws_secret_access_key)
    s3 = session.resource('s3')
    bucket = s3.Bucket('bucket_name')
    
    files_in_s3 = bucket.objects.all() 
    #you can print this iterable with print(list(files_in_s3))
    

    回答 7

    对象摘要:

    ObjectSummary附带有两个标识符:

    • bucket_name

    • key

    boto3 S3:ObjectSummary

    AWS S3 文档中有关对象键（Object Keys）的更多信息：

    对象键：

    创建对象时，请指定键名，该键名唯一标识存储桶中的对象。例如，在Amazon S3控制台（请参阅AWS管理控制台）中，突出显示存储桶时，将显示存储桶中的对象列表。这些名称就是对象键。键的名称是一系列Unicode字符，其UTF-8编码最长为1024个字节。

    Amazon S3数据模型是一个平面结构:创建一个存储桶,该存储桶存储对象。没有子桶或子文件夹的层次结构;但是,您可以像Amazon S3控制台一样使用键名前缀和定界符来推断逻辑层次结构。Amazon S3控制台支持文件夹的概念。假设您的存储桶(由管理员创建)具有四个带有以下对象键的对象:

    Development/Projects1.xls

    Finance/statement1.pdf

    Private/taxdocument.pdf

    s3-dg.pdf

    参考:

    AWS S3：对象键

    这是一些示例代码,演示了如何获取存储桶名称和对象密钥。

    例:

    import boto3
    from pprint import pprint
    
    def main():
    
        def enumerate_s3():
            s3 = boto3.resource('s3')
            for bucket in s3.buckets.all():
                 print("Name: {}".format(bucket.name))
                 print("Creation Date: {}".format(bucket.creation_date))
                 for object in bucket.objects.all():
                     print("Object: {}".format(object))
                     print("Object bucket_name: {}".format(object.bucket_name))
                     print("Object key: {}".format(object.key))
    
        enumerate_s3()
    
    
    if __name__ == '__main__':
        main()

    ObjectSummary:

    There are two identifiers that are attached to the ObjectSummary:

    • bucket_name
    • key

    boto3 S3: ObjectSummary

    More on Object Keys from AWS S3 Documentation:

    Object Keys:

    When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

    The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose that your bucket (admin-created) has four objects with the following object keys:

    Development/Projects1.xls

    Finance/statement1.pdf

    Private/taxdocument.pdf

    s3-dg.pdf

    Reference:

    AWS S3: Object Keys

    Here is some example code that demonstrates how to get the bucket name and the object key.

    Example:

    import boto3
    from pprint import pprint
    
    def main():
    
        def enumerate_s3():
            s3 = boto3.resource('s3')
            for bucket in s3.buckets.all():
                 print("Name: {}".format(bucket.name))
                 print("Creation Date: {}".format(bucket.creation_date))
                 for object in bucket.objects.all():
                     print("Object: {}".format(object))
                     print("Object bucket_name: {}".format(object.bucket_name))
                     print("Object key: {}".format(object.key))
    
        enumerate_s3()
    
    
    if __name__ == '__main__':
        main()
    

    回答 8

    我只是这样做,包括身份验证方法:

    import boto3

    s3_client = boto3.client(
                    's3',
                    aws_access_key_id='access_key',
                    aws_secret_access_key='access_key_secret',
                    config=boto3.session.Config(signature_version='s3v4'),
                    region_name='region'
                )

    # wrapped in a function (name is illustrative) so the "return" statements
    # are valid; "key" is the object key (or prefix) being checked
    def key_exists(key):
        response = s3_client.list_objects(Bucket='bucket_name', Prefix=key)
        if 'Contents' in response:
            # Object / key exists!
            return True
        else:
            # Object / key DOES NOT exist!
            return False

    I just did it like this, including the authentication method:

    import boto3

    s3_client = boto3.client(
                    's3',
                    aws_access_key_id='access_key',
                    aws_secret_access_key='access_key_secret',
                    config=boto3.session.Config(signature_version='s3v4'),
                    region_name='region'
                )

    # wrapped in a function (name is illustrative) so the "return" statements
    # are valid; "key" is the object key (or prefix) being checked
    def key_exists(key):
        response = s3_client.list_objects(Bucket='bucket_name', Prefix=key)
        if 'Contents' in response:
            # Object / key exists!
            return True
        else:
            # Object / key DOES NOT exist!
            return False
    
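
    As a hedged alternative technique (different from the list_objects approach above): to check one exact key, head_object issues a single HEAD request and raises a ClientError with code '404' when the object is absent; this sketch reuses the s3_client defined above:

    from botocore.exceptions import ClientError

    def object_exists(bucket, key):
        """Check one exact key via a HEAD request."""
        try:
            s3_client.head_object(Bucket=bucket, Key=key)
            return True
        except ClientError as e:
            if e.response['Error']['Code'] == '404':
                return False
            raise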

    回答 9

    #To print all filenames in a bucket
    import boto3

    s3 = boto3.client('s3')

    def get_s3_keys(bucket):

        """Get a list of keys in an S3 bucket."""
        files = []
        resp = s3.list_objects_v2(Bucket=bucket)
        for obj in resp['Contents']:
            files.append(obj['Key'])  # accumulate instead of overwriting
        return files


    filenames = get_s3_keys('your_bucket_name')

    print(filenames)

    #To print all filenames in a certain directory in a bucket
    import boto3

    s3 = boto3.client('s3')

    def get_s3_keys(bucket, prefix):

        """Get a list of keys in an S3 bucket."""
        files = []
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
        for obj in resp['Contents']:
            files.append(obj['Key'])
            print(obj['Key'])
        return files


    filenames = get_s3_keys('your_bucket_name', 'folder_name/sub_folder_name/')

    print(filenames)
    #To print all filenames in a bucket
    import boto3

    s3 = boto3.client('s3')

    def get_s3_keys(bucket):

        """Get a list of keys in an S3 bucket."""
        files = []
        resp = s3.list_objects_v2(Bucket=bucket)
        for obj in resp['Contents']:
            files.append(obj['Key'])  # accumulate instead of overwriting
        return files


    filenames = get_s3_keys('your_bucket_name')

    print(filenames)

    #To print all filenames in a certain directory in a bucket
    import boto3

    s3 = boto3.client('s3')

    def get_s3_keys(bucket, prefix):

        """Get a list of keys in an S3 bucket."""
        files = []
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
        for obj in resp['Contents']:
            files.append(obj['Key'])
            print(obj['Key'])
        return files


    filenames = get_s3_keys('your_bucket_name', 'folder_name/sub_folder_name/')

    print(filenames)
    

    回答 10

    对上面其中一个回答里 @Hephaestus 的代码稍作修改，编写了以下方法，用于列出给定路径中的文件夹和对象（文件），工作方式类似于 s3 ls 命令。

    import boto3

    def s3_ls(profile=None, bucket_name=None, folder_path=None):
        folders=[]
        files=[]
        result=dict()
        prefix = folder_path or ""  # None lists the root of the bucket
        session = boto3.Session(profile_name=profile)
        s3_conn   = session.client('s3')
        s3_result =  s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter = "/", Prefix=prefix)
        if 'Contents' not in s3_result and 'CommonPrefixes' not in s3_result:
            return []

        if s3_result.get('CommonPrefixes'):
            for folder in s3_result['CommonPrefixes']:
                folders.append(folder.get('Prefix'))

        if s3_result.get('Contents'):
            for key in s3_result['Contents']:
                files.append(key['Key'])

        while s3_result['IsTruncated']:
            continuation_key = s3_result['NextContinuationToken']
            s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/", ContinuationToken=continuation_key, Prefix=prefix)
            if s3_result.get('CommonPrefixes'):
                for folder in s3_result['CommonPrefixes']:
                    folders.append(folder.get('Prefix'))
            if s3_result.get('Contents'):
                for key in s3_result['Contents']:
                    files.append(key['Key'])

        if folders:
            result['folders']=sorted(folders)
        if files:
            result['files']=sorted(files)
        return result

    这将列出给定路径中的所有对象/文件夹。folder_path 默认可以保留为 None，此时该方法会列出存储桶根目录下的直接内容。

    With little modification to @Hephaestus's code in one of the above answers, I wrote the below method to list folders and objects (files) in a given path. It works similarly to the s3 ls command.

    import boto3

    def s3_ls(profile=None, bucket_name=None, folder_path=None):
        folders=[]
        files=[]
        result=dict()
        prefix = folder_path or ""  # None lists the root of the bucket
        session = boto3.Session(profile_name=profile)
        s3_conn   = session.client('s3')
        s3_result =  s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter = "/", Prefix=prefix)
        if 'Contents' not in s3_result and 'CommonPrefixes' not in s3_result:
            return []

        if s3_result.get('CommonPrefixes'):
            for folder in s3_result['CommonPrefixes']:
                folders.append(folder.get('Prefix'))

        if s3_result.get('Contents'):
            for key in s3_result['Contents']:
                files.append(key['Key'])

        while s3_result['IsTruncated']:
            continuation_key = s3_result['NextContinuationToken']
            s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/", ContinuationToken=continuation_key, Prefix=prefix)
            if s3_result.get('CommonPrefixes'):
                for folder in s3_result['CommonPrefixes']:
                    folders.append(folder.get('Prefix'))
            if s3_result.get('Contents'):
                for key in s3_result['Contents']:
                    files.append(key['Key'])

        if folders:
            result['folders']=sorted(folders)
        if files:
            result['files']=sorted(files)
        return result
    

    This lists all objects / folders in a given path. folder_path can be left as None by default, and the method will list the immediate contents of the root of the bucket.
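
    A possible usage sketch (profile, bucket, and path are placeholder assumptions):

    listing = s3_ls(profile='default', bucket_name='my-bucket', folder_path='logs/')
    if listing:
        print(listing.get('folders', []))
        print(listing.get('files', []))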


    回答 11

    这是解决方案

    import boto3

    s3 = boto3.resource('s3')
    BUCKET_NAME = '您的S3存储桶名称'  # 例如 'deletemetesting11'
    allFiles = s3.Bucket(BUCKET_NAME).objects.all()
    for file in allFiles:
        print(file.key)

    Here is the solution

    import boto3
    
    s3=boto3.resource('s3')
    BUCKET_NAME = 'Your S3 Bucket Name'
    allFiles = s3.Bucket(BUCKET_NAME).objects.all()
    for file in allFiles:
        print(file.key)
    

    Security Monkey监控AWS、GCP、OpenStack和GitHub组织的资产及其随时间的变化

    Security Monkey 监控您的 AWS 和 GCP 帐户的策略变更，并对不安全的配置发出警报。它支持 OpenStack 的公共云和私有云。Security Monkey 还可以监视和监控您的 GitHub 组织、团队和存储库。

    它提供单一 UI，用于浏览和搜索您的所有帐户、区域和云服务。Security Monkey 会记住以前的状态，并能准确地告诉您什么时候发生了变化。

    Security Monkey 可以通过 custom account types、custom watchers、custom auditors 以及 custom alerters 进行扩展。

    它运行在 CPython 2.7 上，已知可以在 Ubuntu Linux 和 OS X 上运行。


    特别注意事项:

    Netflix 对 Security Monkey 的支持已经减少，仅限于修复小错误。话虽如此，我们仍乐于审查并合并修复 bug 或添加新功能的 Pull Request。

    🚨⚠️🥁🎺 请阅读：1.0 版的重大变更（Breaking Changes）🎺🥁⚠️🚨

    如果您是第一次升级到1.0,请查看Quickstart以及Autostarting文档,因为Security Monkey有一个新的部署模式。此外,还添加了新的IAM权限

    项目资源

    实例关系图

    组成Security Monkey的组件如下(不是特定于AWS的):

    访问图

    Security Monkey 使用提供的凭据访问各个帐户以进行扫描（如果可用，则通过 Assume Role 方式）。