Tag Archives: amazon-web-services

How to save an S3 object to a file using boto3

Question: How to save an S3 object to a file using boto3

I’m trying to do a “hello world” with the new boto3 client for AWS.

The use case I have is fairly simple: get an object from S3 and save it to a file.

In boto 2.X I would do it like this:

import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')

In boto 3, I can’t find a clean way to do the same thing, so I’m manually iterating over the “Streaming” object:

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:  # 'wb': Body.read() returns bytes
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)

or

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:  # 'wb': Body.read() returns bytes
    for chunk in iter(lambda: key['Body'].read(4096), b''):
        f.write(chunk)

And it works fine. I was wondering: is there any “native” boto3 function that will do the same task?


Answer 0

There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:

s3_client = boto3.client('s3')
open('hello.txt', 'w').write('Hello, world!')

# Upload the file to S3
s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')

# Download the file from S3
s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')
print(open('hello2.txt').read())

These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files.

Note that s3_client.download_file won’t create a directory. You can create it with pathlib.Path('/path/to/file.txt').parent.mkdir(parents=True, exist_ok=True).
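Combining the two points above, a minimal sketch of a helper that creates the destination's parent directories before downloading; the name `download_to` and its arguments are illustrative, not part of boto3:

```python
from pathlib import Path

def download_to(s3_client, bucket, key, destination):
    """Download an S3 object, creating the destination's parent directories first."""
    dest = Path(destination)
    # download_file won't create directories, so do it ourselves
    dest.parent.mkdir(parents=True, exist_ok=True)
    s3_client.download_file(bucket, key, str(dest))

# In real use, something like:
# download_to(boto3.client('s3'), 'MyBucket', 'hello-remote.txt', '/tmp/nested/dir/hello.txt')
```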


Answer 1

boto3 now has a nicer interface than the client:

resource = boto3.resource('s3')
my_bucket = resource.Bucket('MyBucket')
my_bucket.download_file(key, local_filename)

This by itself isn’t tremendously better than the client in the accepted answer (although the docs say it does a better job retrying uploads and downloads on failure), but considering that resources are generally more ergonomic (for example, the s3 bucket and object resources are nicer than the client methods), this does allow you to stay at the resource layer without having to drop down.

Resources generally can be created in the same way as clients, and they take all or most of the same arguments and just forward them to their internal clients.


Answer 2

For those of you who would like to simulate the boto2-style set_contents_from_string method, you can try:

import boto3
from cStringIO import StringIO

s3c = boto3.client('s3')
contents = 'My string to save to S3 object'
target_bucket = 'hello-world.by.vor'
target_file = 'data/hello.txt'
fake_handle = StringIO(contents)

# notice if you do fake_handle.read() it reads like a file handle
s3c.put_object(Bucket=target_bucket, Key=target_file, Body=fake_handle.read())

For Python 3:

In Python 3, both StringIO and cStringIO are gone. Import StringIO like this:

from io import StringIO

To support both versions:

try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
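Note that on Python 3, put_object wants bytes for binary content; io.BytesIO is the binary counterpart of StringIO. A minimal sketch (the put_object call is left as a comment because it needs a real bucket):

```python
from io import BytesIO

contents = b'My bytes to save to an S3 object'
fake_handle = BytesIO(contents)

# As with StringIO, it reads like a file handle:
body = fake_handle.read()

# s3c.put_object(Bucket=target_bucket, Key=target_file, Body=body)
```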

Answer 3

# Preface: file is JSON with contents: {"name": "Android", "status": "ERROR"}

import boto3
import io
import json

s3 = boto3.resource('s3')

obj = s3.Object('my-bucket', 'key-to-file.json')
data = io.BytesIO()
obj.download_fileobj(data)

# data now holds the raw bytes; convert them to a dict:
new_dict = json.loads(data.getvalue().decode("utf-8"))

print(new_dict['status'])
# Should print "ERROR"

Answer 4

When you want to read a file with a different configuration than the default one, feel free to use either mpu.aws.s3_download(s3path, destination) directly or the copy-pasted code:

import os

import boto3


def s3_download(source, destination,
                exists_strategy='raise',
                profile_name=None):
    """
    Copy a file from an S3 source to a local destination.

    Parameters
    ----------
    source : str
        Path starting with s3://, e.g. 's3://bucket-name/key/foo.bar'
    destination : str
    exists_strategy : {'raise', 'replace', 'abort'}
        What is done when the destination already exists?
    profile_name : str, optional
        AWS profile

    Raises
    ------
    botocore.exceptions.NoCredentialsError
        Botocore is not able to find your credentials. Either specify
        profile_name or add the environment variables AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
        See https://boto3.readthedocs.io/en/latest/guide/configuration.html
    """
    exists_strategies = ['raise', 'replace', 'abort']
    if exists_strategy not in exists_strategies:
        raise ValueError('exists_strategy \'{}\' is not in {}'
                         .format(exists_strategy, exists_strategies))
    session = boto3.Session(profile_name=profile_name)
    s3 = session.resource('s3')
    bucket_name, key = _s3_path_split(source)
    if os.path.isfile(destination):
        if exists_strategy == 'raise':
            raise RuntimeError('File \'{}\' already exists.'
                               .format(destination))
        elif exists_strategy == 'abort':
            return
    s3.Bucket(bucket_name).download_file(key, destination)

from collections import namedtuple

S3Path = namedtuple("S3Path", ["bucket_name", "key"])


def _s3_path_split(s3_path):
    """
    Split an S3 path into bucket and key.

    Parameters
    ----------
    s3_path : str

    Returns
    -------
    splitted : (str, str)
        (bucket, key)

    Examples
    --------
    >>> _s3_path_split('s3://my-bucket/foo/bar.jpg')
    S3Path(bucket_name='my-bucket', key='foo/bar.jpg')
    """
    if not s3_path.startswith("s3://"):
        raise ValueError(
            "s3_path is expected to start with 's3://', but was {}"
            .format(s3_path)
        )
    bucket_key = s3_path[len("s3://"):]
    bucket_name, key = bucket_key.split("/", 1)
    return S3Path(bucket_name, key)

Answer 5

Note: I’m assuming you have configured authentication separately. The code below downloads a single object from an S3 bucket.

import boto3

# Initiate the S3 resource
s3 = boto3.resource('s3')

# Download the object to a local file
s3.Bucket('mybucket').download_file('hello.txt', '/tmp/hello.txt')

How to choose an AWS profile when using boto3 to connect to CloudFront

Question: How to choose an AWS profile when using boto3 to connect to CloudFront

I am using the Boto 3 Python library and want to connect to AWS CloudFront. I need to specify the correct AWS profile (AWS credentials), but looking at the official documentation, I see no way to specify it.

I am initializing the client using the code: client = boto3.client('cloudfront')

However, this results in it using the default profile to connect. I couldn’t find a method where I can specify which profile to use.


Answer 0

I think the docs aren’t wonderful at exposing how to do this. It has been a supported feature for some time, however, and there are some details in this pull request.

So there are three different ways to do this:

Option A) Create a new session with the profile

    dev = boto3.session.Session(profile_name='dev')

Option B) Change the profile of the default session in code

    boto3.setup_default_session(profile_name='dev')

Option C) Change the profile of the default session with an environment variable

    $ AWS_PROFILE=dev ipython
    >>> import boto3
    >>> s3dev = boto3.resource('s3')
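In all three options, the profile name refers to a section of the shared credentials file, ~/.aws/credentials by default. For reference, a 'dev' profile would be defined like this (the key values below are placeholders):

```ini
[default]
aws_access_key_id = DEFAULTKEYID
aws_secret_access_key = DEFAULTSECRET

[dev]
aws_access_key_id = DEVKEYID
aws_secret_access_key = DEVSECRET
```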

Answer 1

Do this to use a profile named ‘dev’:

session = boto3.session.Session(profile_name='dev')
s3 = session.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)

Answer 2

This section of the boto3 documentation is helpful.

Here’s what worked for me:

session = boto3.Session(profile_name='dev')
client = session.client('cloudfront')

Answer 3

Just add profile to session configuration before client call. boto3.session.Session(profile_name='YOUR_PROFILE_NAME').client('cloudwatch')


How to install Python 3 on an AWS EC2 instance?

Question: How to install Python 3 on an AWS EC2 instance?

I’m trying to install Python 3.x on an AWS EC2 instance, and:

sudo yum install python3

doesn’t work:

No package python3 available.

I’ve googled around and I can’t find anyone else who has this problem so I’m asking here. Do I have to manually download and install it?


Answer 0

If you do a

sudo yum list | grep python3

you will see that while they don’t have a “python3” package, they do have a “python34” package, or a more recent release, such as “python36”. Installing it is as easy as:

sudo yum install python34 python34-pip

Answer 1

Note: this may be obsolete for current versions of Amazon Linux 2 (since late 2018, see comments); you can now install it directly via yum install python3.

In Amazon Linux 2, there isn’t a python3[4-6] in the default yum repos; instead, there’s the Amazon Extras Library:

sudo amazon-linux-extras install python3

If you want to set up isolated virtual environments with it, note that the yum-installed virtualenv tool doesn’t seem to work reliably:

virtualenv --python=python3 my_venv

Calling the venv module/tool is less finicky, and you can double-check it’s what you want/expect with python3 --version beforehand.

python3 -m venv my_venv
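For completeness, the venv route end to end (a sketch; my_venv is an arbitrary directory name):

```shell
python3 -m venv my_venv      # create the environment (as above)
. my_venv/bin/activate       # activate it: the prompt gains a "(my_venv)" prefix
python --version             # now resolves to the environment's interpreter
deactivate                   # leave the environment
```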

Other things it can install (versions as of 18 Jan 2018):

[ec2-user@x ~]$ amazon-linux-extras list
  0  ansible2   disabled  [ =2.4.2 ]
  1  emacs   disabled  [ =25.3 ]
  2  memcached1.5   disabled  [ =1.5.1 ]
  3  nginx1.12   disabled  [ =1.12.2 ]
  4  postgresql9.6   disabled  [ =9.6.6 ]
  5  python3=latest  enabled  [ =3.6.2 ]
  6  redis4.0   disabled  [ =4.0.5 ]
  7  R3.4   disabled  [ =3.4.3 ]
  8  rust1   disabled  [ =1.22.1 ]
  9  vim   disabled  [ =8.0 ]
 10  golang1.9   disabled  [ =1.9.2 ]
 11  ruby2.4   disabled  [ =2.4.2 ]
 12  nano   disabled  [ =2.9.1 ]
 13  php7.2   disabled  [ =7.2.0 ]
 14  lamp-mariadb10.2-php7.2   disabled  [ =10.2.10_7.2.0 ]

Answer 2

Here are the steps I used to manually install Python 3, for anyone else who wants to do it, as it’s not super straightforward. EDIT: it’s almost certainly easier to use the yum package manager (see other answers).

Note: you’ll probably want to run sudo yum groupinstall 'Development Tools' first, otherwise pip won’t install.

wget https://www.python.org/ftp/python/3.4.2/Python-3.4.2.tgz
tar zxvf Python-3.4.2.tgz
cd Python-3.4.2
sudo yum install gcc
./configure --prefix=/opt/python3
make
sudo yum install openssl-devel
sudo make install
sudo ln -s /opt/python3/bin/python3 /usr/bin/python3
python3  # should start the interpreter if it worked (quit() to exit)

Answer 3

EC2 (on the Amazon Linux AMI) currently supports python3.4 and python3.5.

sudo yum install python35
sudo yum install python35-pip

Answer 4

As of Amazon Linux version 2017.09, Python 3.6 is now available:

sudo yum install python36 python36-virtualenv python36-pip

See the Release Notes for more info and other packages.


Answer 5

Amazon Linux now supports python36.

python36-pip is not available, so you need to follow a different route:

sudo yum install python36 python36-devel python36-libs python36-tools

# If you like to have pip3.6:
curl -O https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py

Answer 6

As @NickT said, there’s no python3[4-6] in the default yum repos in Amazon Linux 2; as of today it uses 3.7, and looking at all the answers here, we can say that will change over time.

I was looking for python3.6 on Amazon Linux 2, but amazon-linux-extras shows a lot of options and no Python at all. In fact, you can try to find the version you know in the epel repo:

sudo amazon-linux-extras install epel

yum search python | grep "^python3..x8"

python34.x86_64 : Version 3 of the Python programming language aka Python 3000
python36.x86_64 : Interpreter of the Python programming language

Answer 7

In addition to all the answers already available for this question, here are the steps I followed to install Python 3 on an AWS EC2 instance running CentOS 7. You can find the entire details at this link.

https://aws-labs.com/install-python-3-centos-7-2/

First, we need to enable SCL. SCL is a community project that allows you to build, install, and use multiple versions of software on the same system, without affecting system default packages.

sudo yum install centos-release-scl

Now that we have the SCL repository, we can install python3:

sudo yum install rh-python36

To access Python 3.6 you need to launch a new shell instance using the Software Collection scl tool:

scl enable rh-python36 bash

If you check the Python version now, you’ll notice that Python 3.6 is the default version:

python --version

It is important to point out that Python 3.6 is the default Python version only in this shell session. If you exit the session or open a new session from another terminal Python 2.7 will be the default Python version.

Now, install the Python development tools by typing:

sudo yum groupinstall 'Development Tools'

Now create a virtual environment so that the default Python packages don’t get messed up.

mkdir ~/my_new_project
cd ~/my_new_project
python -m venv my_project_venv

To use this virtual environment,

source my_project_venv/bin/activate

Now, you have your virtual environment set up with python3.


Answer 8

On Debian derivatives such as Ubuntu, use apt. Check the apt repository for the versions of Python available to you. Then, run a command similar to the following, substituting the correct package name:

sudo apt-get install python3

On Red Hat and derivatives, use yum. Check the yum repository for the versions of Python available to you. Then, run a command similar to the following, substituting the correct package name:

sudo yum install python36

On SUSE and derivatives, use zypper. Check the repository for the versions of Python available to you. Then, run a command similar to the following, substituting the correct package name:

sudo zypper install python3

How to upload a file to a directory in an S3 bucket using Boto

Question: How to upload a file to a directory in an S3 bucket using Boto

I want to copy a file to an S3 bucket using Python.

Ex: I have a bucket named “test”, and in the bucket I have two folders named “dump” and “input”. Now I want to copy a file from a local directory to the S3 “dump” folder using Python… Can anyone help me?


Answer 0

Try this…

import boto
import boto.s3
import sys
from boto.s3.key import Key

AWS_ACCESS_KEY_ID = ''
AWS_SECRET_ACCESS_KEY = ''

bucket_name = AWS_ACCESS_KEY_ID.lower() + '-dump'
conn = boto.connect_s3(AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY)


bucket = conn.create_bucket(bucket_name,
    location=boto.s3.connection.Location.DEFAULT)

testfile = "replace this with an actual filename"
print('Uploading %s to Amazon S3 bucket %s' %
      (testfile, bucket_name))

def percent_cb(complete, total):
    sys.stdout.write('.')
    sys.stdout.flush()


k = Key(bucket)
k.key = 'my test file'
k.set_contents_from_filename(testfile,
    cb=percent_cb, num_cb=10)

[UPDATE] I am not a pythonist, so thanks for the heads up about the import statements. Also, I’d not recommend placing credentials inside your own source code. If you are running this inside AWS use IAM Credentials with Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html), and to keep the same behaviour in your Dev/Test environment, use something like Hologram from AdRoll (https://github.com/AdRoll/hologram)


Answer 1

No need to make it that complicated:

s3_connection = boto.connect_s3()
bucket = s3_connection.get_bucket('your bucket name')
key = boto.s3.key.Key(bucket, 'some_file.zip')
with open('some_file.zip', 'rb') as f:  # binary mode for a zip file
    key.send_file(f)

Answer 2

import boto3

s3 = boto3.resource('s3')
BUCKET = "test"

s3.Bucket(BUCKET).upload_file("your/local/file", "dump/file")

Answer 3

I used this and it is very simple to implement

import tinys3

conn = tinys3.Connection('S3_ACCESS_KEY','S3_SECRET_KEY',tls=True)

f = open('some_file.zip','rb')
conn.upload('some_file.zip',f,'my_bucket')

https://www.smore.com/labs/tinys3/


Answer 4

from boto3.s3.transfer import S3Transfer
import boto3
#have all the variables populated which are required below
client = boto3.client('s3', aws_access_key_id=access_key,aws_secret_access_key=secret_key)
transfer = S3Transfer(client)
transfer.upload_file(filepath, bucket_name, folder_name+"/"+filename)

Answer 5

Upload a file to S3 within a session with credentials.

import boto3

session = boto3.Session(
    aws_access_key_id='AWS_ACCESS_KEY_ID',
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY',
)
s3 = session.resource('s3')
# Filename - File to upload
# Bucket - Bucket to upload to (the top level directory under AWS S3)
# Key - S3 object name (can contain subdirectories). If not specified then file_name is used
s3.meta.client.upload_file(Filename='input_file_path', Bucket='bucket_name', Key='s3_output_key')

Answer 6

This will also work:

import os 
import boto
import boto.s3.connection
from boto.s3.key import Key

try:

    conn = boto.s3.connect_to_region('us-east-1',
    aws_access_key_id = 'AWS-Access-Key',
    aws_secret_access_key = 'AWS-Secrete-Key',
    # host = 's3-website-us-east-1.amazonaws.com',
    # is_secure=True,               # uncomment if you are not using ssl
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.get_bucket('YourBucketName')
    key_name = 'FileToUpload'
    path = 'images/holiday'  # directory under which the file should be uploaded
    full_key_name = os.path.join(path, key_name)
    k = bucket.new_key(full_key_name)
    k.set_contents_from_filename(key_name)

except Exception as e:
    print(str(e))
    print("error")
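One subtlety in the snippet above: os.path.join uses the platform separator, so on Windows it would produce a backslash, which S3 treats as part of a flat key name rather than as a folder. S3 keys always use forward slashes, so posixpath is a safer way to build them (a sketch; s3_key is an illustrative helper, not a boto API):

```python
import posixpath

def s3_key(folder, filename):
    # S3 has no real directories; a '/' in the key is what creates the
    # folder illusion in the console, on every platform.
    return posixpath.join(folder, filename)

print(s3_key('images/holiday', 'FileToUpload'))  # images/holiday/FileToUpload
```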

Answer 7

This is a three-liner. Just follow the instructions in the boto3 documentation.

import boto3
s3 = boto3.resource(service_name = 's3')
s3.meta.client.upload_file(Filename = 'C:/foo/bar/baz.filetype', Bucket = 'yourbucketname', Key = 'baz.filetype')

Some important arguments are:

Parameters:

  • Filename (str) — The path to the file to upload.
  • Bucket (str) — The name of the bucket to upload to.
  • Key (str) — The name that you want to assign to your file in your S3 bucket. This could be the same as the name of the file or a different name of your choice, but the filetype should remain the same.

Note: I assume that you have saved your credentials in a ~/.aws folder as suggested in the configuration best practices in the boto3 documentation.


Answer 8

import boto
from boto.s3.key import Key

AWS_ACCESS_KEY_ID = ''
AWS_SECRET_ACCESS_KEY = ''
END_POINT = ''                          # eg. us-east-1
S3_HOST = ''                            # eg. s3.us-east-1.amazonaws.com
BUCKET_NAME = 'test'
FILENAME = 'upload.txt'
UPLOADED_FILENAME = 'dumps/upload.txt'
# include folders in the file path; if they don't exist, they will be created

s3 = boto.s3.connect_to_region(END_POINT,
                               aws_access_key_id=AWS_ACCESS_KEY_ID,
                               aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                               host=S3_HOST)

bucket = s3.get_bucket(BUCKET_NAME)
k = Key(bucket)
k.key = UPLOADED_FILENAME
k.set_contents_from_filename(FILENAME)

    回答 9

    使用boto3

    import logging
    import boto3
    from botocore.exceptions import ClientError
    
    
    def upload_file(file_name, bucket, object_name=None):
        """Upload a file to an S3 bucket
    
        :param file_name: File to upload
        :param bucket: Bucket to upload to
        :param object_name: S3 object name. If not specified then file_name is used
        :return: True if file was uploaded, else False
        """
    
        # If S3 object_name was not specified, use file_name
        if object_name is None:
            object_name = file_name
    
        # Upload the file
        s3_client = boto3.client('s3')
        try:
            response = s3_client.upload_file(file_name, bucket, object_name)
        except ClientError as e:
            logging.error(e)
            return False
        return True

    有关更多信息,请参见:https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html

    Using boto3

    import logging
    import boto3
    from botocore.exceptions import ClientError
    
    
    def upload_file(file_name, bucket, object_name=None):
        """Upload a file to an S3 bucket
    
        :param file_name: File to upload
        :param bucket: Bucket to upload to
        :param object_name: S3 object name. If not specified then file_name is used
        :return: True if file was uploaded, else False
        """
    
        # If S3 object_name was not specified, use file_name
        if object_name is None:
            object_name = file_name
    
        # Upload the file
        s3_client = boto3.client('s3')
        try:
            response = s3_client.upload_file(file_name, bucket, object_name)
        except ClientError as e:
            logging.error(e)
            return False
        return True
    

    For more:- https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html


    回答 10

    上传整个文件夹的示例代码如下(原答案还附有S3文件夹的截图):

    import boto
    import boto.s3
    import boto.s3.connection
    import os.path
    import sys    
    
    # Fill in info on data to upload
    # destination bucket name
    bucket_name = 'willie20181121'
    # source directory
    sourceDir = '/home/willie/Desktop/x/'  #Linux Path
    # destination directory name (on s3)
    destDir = '/test1/'   #S3 Path
    
    #max size in bytes before uploading in parts. between 1 and 5 GB recommended
    MAX_SIZE = 20 * 1000 * 1000
    #size of parts when uploading in parts
    PART_SIZE = 6 * 1000 * 1000
    
    access_key = 'MPBVAQ*******IT****'
    secret_key = '11t63yDV***********HgUcgMOSN*****'
    
    conn = boto.connect_s3(
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_key,
            host = '******.org.tw',
            is_secure=False,               # uncomment if you are not using ssl
            calling_format = boto.s3.connection.OrdinaryCallingFormat(),
            )
    bucket = conn.create_bucket(bucket_name,
            location=boto.s3.connection.Location.DEFAULT)
    
    
    uploadFileNames = []
    for (dirpath, dirnames, filenames) in os.walk(sourceDir):
        uploadFileNames.extend(filenames)
        break
    
    def percent_cb(complete, total):
        sys.stdout.write('.')
        sys.stdout.flush()
    
    for filename in uploadFileNames:
        sourcepath = os.path.join(sourceDir, filename)
        destpath = os.path.join(destDir, filename)
        print ('Uploading %s to Amazon S3 bucket %s' % \
               (sourcepath, bucket_name))
    
        filesize = os.path.getsize(sourcepath)
        if filesize > MAX_SIZE:
            print ("multipart upload")
            mp = bucket.initiate_multipart_upload(destpath)
            fp = open(sourcepath,'rb')
            fp_num = 0
            while (fp.tell() < filesize):
                fp_num += 1
                print ("uploading part %i" %fp_num)
                mp.upload_part_from_file(fp, fp_num, cb=percent_cb, num_cb=10, size=PART_SIZE)
    
            mp.complete_upload()
    
        else:
            print ("singlepart upload")
            k = boto.s3.key.Key(bucket)
            k.key = destpath
            k.set_contents_from_filename(sourcepath,
                    cb=percent_cb, num_cb=10)

    PS:更多信息请参考此URL

    For an example of uploading a folder, see the following code (the original answer also included a picture of the S3 folder):

    import boto
    import boto.s3
    import boto.s3.connection
    import os.path
    import sys    
    
    # Fill in info on data to upload
    # destination bucket name
    bucket_name = 'willie20181121'
    # source directory
    sourceDir = '/home/willie/Desktop/x/'  #Linux Path
    # destination directory name (on s3)
    destDir = '/test1/'   #S3 Path
    
    #max size in bytes before uploading in parts. between 1 and 5 GB recommended
    MAX_SIZE = 20 * 1000 * 1000
    #size of parts when uploading in parts
    PART_SIZE = 6 * 1000 * 1000
    
    access_key = 'MPBVAQ*******IT****'
    secret_key = '11t63yDV***********HgUcgMOSN*****'
    
    conn = boto.connect_s3(
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_key,
            host = '******.org.tw',
            is_secure=False,               # uncomment if you are not using ssl
            calling_format = boto.s3.connection.OrdinaryCallingFormat(),
            )
    bucket = conn.create_bucket(bucket_name,
            location=boto.s3.connection.Location.DEFAULT)
    
    
    uploadFileNames = []
    for (dirpath, dirnames, filenames) in os.walk(sourceDir):
        uploadFileNames.extend(filenames)
        break
    
    def percent_cb(complete, total):
        sys.stdout.write('.')
        sys.stdout.flush()
    
    for filename in uploadFileNames:
        sourcepath = os.path.join(sourceDir, filename)
        destpath = os.path.join(destDir, filename)
        print ('Uploading %s to Amazon S3 bucket %s' % \
               (sourcepath, bucket_name))
    
        filesize = os.path.getsize(sourcepath)
        if filesize > MAX_SIZE:
            print ("multipart upload")
            mp = bucket.initiate_multipart_upload(destpath)
            fp = open(sourcepath,'rb')
            fp_num = 0
            while (fp.tell() < filesize):
                fp_num += 1
                print ("uploading part %i" %fp_num)
                mp.upload_part_from_file(fp, fp_num, cb=percent_cb, num_cb=10, size=PART_SIZE)
    
            mp.complete_upload()
    
        else:
            print ("singlepart upload")
            k = boto.s3.key.Key(bucket)
            k.key = destpath
            k.set_contents_from_filename(sourcepath,
                    cb=percent_cb, num_cb=10)
    

    PS: For more, see the reference URL.
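    The multipart branch above walks the file in PART_SIZE chunks; the part arithmetic can be sketched on its own, without any AWS calls (plan_parts is a hypothetical helper for illustration, not part of boto):

```python
import math

def plan_parts(filesize, part_size):
    """Return (part_number, offset, length) tuples covering `filesize` bytes
    in chunks of at most `part_size` bytes, numbered from 1 as S3 expects."""
    n_parts = math.ceil(filesize / part_size)
    return [
        (i + 1, i * part_size, min(part_size, filesize - i * part_size))
        for i in range(n_parts)
    ]

# A 20 MB file with 6 MB parts needs 4 parts; the final part holds the 2 MB remainder.
parts = plan_parts(20 * 1000 * 1000, 6 * 1000 * 1000)
```

    The same arithmetic explains why the loop above stops at `fp.tell() < filesize`: the last part is simply whatever bytes remain.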


    回答 11

    xmlstr = etree.tostring(listings,  encoding='utf8', method='xml')
    conn = boto.connect_s3(
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_key,
            # host = '<bucketName>.s3.amazonaws.com',
            host = 'bycket.s3.amazonaws.com',
            #is_secure=False,               # uncomment if you are not using ssl
            calling_format = boto.s3.connection.OrdinaryCallingFormat(),
            )
    conn.auth_region_name = 'us-west-1'
    
    bucket = conn.get_bucket('resources', validate=False)
    key= bucket.get_key('filename.txt')
    key.set_contents_from_string("SAMPLE TEXT")
    key.set_canned_acl('public-read')

    回答 12

    我有一个在我看来条理更清晰一些的写法:

    import boto3
    from pprint import pprint
    from botocore.exceptions import NoCredentialsError
    
    
    class S3(object):
        BUCKET = "test"
        connection = None
    
        def __init__(self):
            try:
                vars = get_s3_credentials("aws")
                # 假设get_s3_credentials返回包含这两个键的字典
                self.connection = boto3.resource('s3',
                                                 aws_access_key_id=vars['aws_access_key_id'],
                                                 aws_secret_access_key=vars['aws_secret_access_key'])
            except(Exception) as error:
                print(error)
                self.connection = None
    
    
        def upload_file(self, file_to_upload_path, file_name):
            if file_to_upload_path is None or file_name is None: return False
            try:
                pprint(file_to_upload_path)
                file_name = "your-folder-inside-s3/{0}".format(file_name)
                self.connection.Bucket(self.BUCKET).upload_file(file_to_upload_path, 
                                                                          file_name)
                print("Upload Successful")
                return True
    
            except FileNotFoundError:
                print("The file was not found")
                return False
    
            except NoCredentialsError:
                print("Credentials not available")
                return False
    
    

    这里有三个重要的变量:BUCKET常量、file_to_upload_path和file_name。

    BUCKET:是您的S3存储桶的名称

    file_to_upload_path:必须是您要上传的文件的路径

    file_name:是存储桶中生成的文件和路径(这是您添加文件夹或其他内容的位置)

    方法有很多,但您可以像下面这样在另一个脚本中重用这段代码:

    import S3
    
    def some_function():
        S3.S3().upload_file(path_to_file, final_file_name)

    I have something that seems to me to have a bit more order:

    import boto3
    from pprint import pprint
    from botocore.exceptions import NoCredentialsError
    
    
    class S3(object):
        BUCKET = "test"
        connection = None
    
        def __init__(self):
            try:
                vars = get_s3_credentials("aws")
                # assuming get_s3_credentials returns a dict with these two keys
                self.connection = boto3.resource('s3',
                                                 aws_access_key_id=vars['aws_access_key_id'],
                                                 aws_secret_access_key=vars['aws_secret_access_key'])
            except(Exception) as error:
                print(error)
                self.connection = None
    
    
        def upload_file(self, file_to_upload_path, file_name):
            if file_to_upload_path is None or file_name is None: return False
            try:
                pprint(file_to_upload_path)
                file_name = "your-folder-inside-s3/{0}".format(file_name)
                self.connection.Bucket(self.BUCKET).upload_file(file_to_upload_path, 
                                                                          file_name)
                print("Upload Successful")
                return True
    
            except FileNotFoundError:
                print("The file was not found")
                return False
    
            except NoCredentialsError:
                print("Credentials not available")
                return False
    
    
    

    There are three important variables here: the BUCKET const, file_to_upload_path and file_name.

    BUCKET: is the name of your S3 bucket

    file_to_upload_path: must be the path from file you want to upload

    file_name: is the resulting file and path in your bucket (this is where you add folders or what ever)

    There are many ways, but you can reuse this code in another script like this:

    import S3
    
    def some_function():
        S3.S3().upload_file(path_to_file, final_file_name)
    

    连接到boto3 S3时如何指定凭据?

    问题:连接到boto3 S3时如何指定凭据?

    在boto上,当以这种方式连接到S3时,我通常指定我的凭据:

    import boto
    from boto.s3.connection import Key, S3Connection
    S3 = S3Connection( settings.AWS_SERVER_PUBLIC_KEY, settings.AWS_SERVER_SECRET_KEY )
    

    然后,我可以使用S3执行操作(在我的情况下,从存储桶中删除对象)。

    使用boto3,我发现的所有示例都是这样的:

    import boto3
    S3 = boto3.resource( 's3' )
    S3.Object( bucket_name, key_name ).delete()
    

    我无法指定我的凭据,因此所有尝试均因InvalidAccessKeyId错误而失败。

    如何使用boto3指定凭据?

    On boto I used to specify my credentials when connecting to S3 in such a way:

    import boto
    from boto.s3.connection import Key, S3Connection
    S3 = S3Connection( settings.AWS_SERVER_PUBLIC_KEY, settings.AWS_SERVER_SECRET_KEY )
    

    I could then use S3 to perform my operations (in my case deleting an object from a bucket).

    With boto3 all the examples I found are such:

    import boto3
    S3 = boto3.resource( 's3' )
    S3.Object( bucket_name, key_name ).delete()
    

    I couldn’t specify my credentials and thus all attempts fail with InvalidAccessKeyId error.

    How can I specify credentials with boto3?


    回答 0

    您可以创建一个会话

    import boto3
    session = boto3.Session(
        aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
        aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY,
    )
    

    然后使用该会话获取S3资源:

    s3 = session.resource('s3')
    

    You can create a session:

    import boto3
    session = boto3.Session(
        aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY,
        aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY,
    )
    

    Then use that session to get an S3 resource:

    s3 = session.resource('s3')
    

    回答 1

    您可以像下面这样直接获得一个带有新会话的client。

     s3_client = boto3.client('s3', 
                          aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY, 
                          aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY, 
                          region_name=REGION_NAME
                          )
    

    You can get a client with new session directly like below.

     s3_client = boto3.client('s3', 
                          aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY, 
                          aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY, 
                          region_name=REGION_NAME
                          )
    

    回答 2

    这个问题比较旧了,但也放在这里供参考。boto3.resource只是使用默认的Session,您可以向boto3.resource传递会话详细信息。

    Help on function resource in module boto3:
    
    resource(*args, **kwargs)
        Create a resource service client by name using the default session.
    
        See :py:meth:`boto3.session.Session.resource`.
    

    https://github.com/boto/boto3/blob/86392b5ca26da57ce6a776365a52d3cab8487d60/boto3/session.py#L265

    您会看到它只接受与Boto3.Session相同的参数

    import boto3
    S3 = boto3.resource('s3', region_name='us-west-2', aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY, aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY)
    S3.Object( bucket_name, key_name ).delete()
    

    This is older but placing this here for my reference too. boto3.resource is just implementing the default Session, you can pass through boto3.resource session details.

    Help on function resource in module boto3:
    
    resource(*args, **kwargs)
        Create a resource service client by name using the default session.
    
        See :py:meth:`boto3.session.Session.resource`.
    

    https://github.com/boto/boto3/blob/86392b5ca26da57ce6a776365a52d3cab8487d60/boto3/session.py#L265

    you can see that it just takes the same arguments as Boto3.Session

    import boto3
    S3 = boto3.resource('s3', region_name='us-west-2', aws_access_key_id=settings.AWS_SERVER_PUBLIC_KEY, aws_secret_access_key=settings.AWS_SERVER_SECRET_KEY)
    S3.Object( bucket_name, key_name ).delete()
    

    回答 3

    我想扩展一下@JustAGuy的答案。我更喜欢的方法是使用AWS CLI创建配置文件。这样做的原因是:有了配置文件,CLI或SDK会自动在~/.aws文件夹中查找凭据。而且AWS CLI本身就是用Python编写的。

    如果还没有安装,可以从pypi获取CLI。以下是在终端中配置CLI的步骤:

    $> pip install awscli  #can add user flag 
    $> aws configure
    AWS Access Key ID [****************ABCD]:[enter your key here]
    AWS Secret Access Key [****************xyz]:[enter your secret key here]
    Default region name [us-west-2]:[enter your region here]
    Default output format [None]:
    

    之后,您无需指定密钥即可访问boto和任何API(除非您想使用其他凭据)。

    I’d like to expand on @JustAGuy’s answer. The method I prefer is to use AWS CLI to create a config file. The reason is, with the config file, the CLI or the SDK will automatically look for credentials in the ~/.aws folder. And the good thing is that AWS CLI is written in python.

    You can get cli from pypi if you don’t have it already. Here are the steps to get cli set up from terminal

    $> pip install awscli  #can add user flag 
    $> aws configure
    AWS Access Key ID [****************ABCD]:[enter your key here]
    AWS Secret Access Key [****************xyz]:[enter your secret key here]
    Default region name [us-west-2]:[enter your region here]
    Default output format [None]:
    

    After this you can access boto and any of the api without having to specify keys (unless you want to use a different credentials).
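    What `aws configure` actually writes is a plain INI file that both the CLI and boto3 read. A minimal stdlib sketch of that file format and how it parses (the key values below are illustrative placeholders, not real credentials):

```python
import configparser
from io import StringIO

# Illustrative contents of ~/.aws/credentials after running `aws configure`
sample = """\
[default]
aws_access_key_id = AKIDEXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/EXAMPLEKEY
"""

config = configparser.ConfigParser()
config.read_file(StringIO(sample))
access_key = config["default"]["aws_access_key_id"]
```

    Each `[section]` is a named profile; `default` is the one boto3 picks up when you don't specify a profile.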


    回答 4

    在仍然使用boto3.resource()的情况下,有多种存储凭证的方法。我自己在使用AWS CLI方法。完美运作。

    https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html

    There are numerous ways to store credentials while still using boto3.resource(). I’m using the AWS CLI method myself. It works perfectly.

    https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html


    如何使用boto3将文件或数据写入S3对象

    问题:如何使用boto3将文件或数据写入S3对象

    在boto 2中,可以使用以下方法写入S3对象:

    是否有boto 3等效项?将数据保存到S3上存储的对象的boto3方法是什么?

    In boto 2, you can write to an S3 object using these methods:

    Is there a boto 3 equivalent? What is the boto3 method for saving data to an object stored on S3?


    回答 0

    在Boto 3中,“Key.set_contents_from_…”系列方法被下面示例中的方法所取代。

    例如:

    import boto3
    
    some_binary_data = b'Here we have some data'
    more_binary_data = b'Here we have some more data'
    
    # Method 1: Object.put()
    s3 = boto3.resource('s3')
    object = s3.Object('my_bucket_name', 'my/key/including/filename.txt')
    object.put(Body=some_binary_data)
    
    # Method 2: Client.put_object()
    client = boto3.client('s3')
    client.put_object(Body=more_binary_data, Bucket='my_bucket_name', Key='my/key/including/anotherfilename.txt')

    另外,二进制数据也可以来自读取文件,如官方文档中对boto 2与boto 3的比较所述:

    存储数据

    从文件,流或字符串存储数据很容易:

    # Boto 2.x
    from boto.s3.key import Key
    key = Key('hello.txt')
    key.set_contents_from_file('/tmp/hello.txt')
    
    # Boto 3
    s3.Object('mybucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))

    In boto 3, the ‘Key.set_contents_from_…’ methods were replaced by the methods shown in the example below.

    For example:

    import boto3
    
    some_binary_data = b'Here we have some data'
    more_binary_data = b'Here we have some more data'
    
    # Method 1: Object.put()
    s3 = boto3.resource('s3')
    object = s3.Object('my_bucket_name', 'my/key/including/filename.txt')
    object.put(Body=some_binary_data)
    
    # Method 2: Client.put_object()
    client = boto3.client('s3')
    client.put_object(Body=more_binary_data, Bucket='my_bucket_name', Key='my/key/including/anotherfilename.txt')
    

    Alternatively, the binary data can come from reading a file, as described in the official docs comparing boto 2 and boto 3:

    Storing Data

    Storing data from a file, stream, or string is easy:

    # Boto 2.x
    from boto.s3.key import Key
    key = Key('hello.txt')
    key.set_contents_from_file('/tmp/hello.txt')
    
    # Boto 3
    s3.Object('mybucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))
    

    回答 1

    boto3还有一种直接上传文件的方法:

    s3.Bucket('bucketname').upload_file('/local/file/here.txt','folder/sub/path/to/s3key')

    http://boto3.readthedocs.io/zh_CN/latest/reference/services/s3.html#S3.Bucket.upload_file

    boto3 also has a method for uploading a file directly:

    s3.Bucket('bucketname').upload_file('/local/file/here.txt','folder/sub/path/to/s3key')
    

    http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.upload_file


    回答 2

    在S3中写入文件之前,您不再需要将内容转换为二进制文件。以下示例在具有字符串内容的S3存储桶中创建一个新的文本文件(称为newfile.txt):

    import boto3
    
    s3 = boto3.resource(
        's3',
        region_name='us-east-1',
        aws_access_key_id=KEY_ID,
        aws_secret_access_key=ACCESS_KEY
    )
    content="String content to write to a new S3 file"
    s3.Object('my-bucket-name', 'newfile.txt').put(Body=content)

    You no longer have to convert the contents to binary before writing to the file in S3. The following example creates a new text file (called newfile.txt) in an S3 bucket with string contents:

    import boto3
    
    s3 = boto3.resource(
        's3',
        region_name='us-east-1',
        aws_access_key_id=KEY_ID,
        aws_secret_access_key=ACCESS_KEY
    )
    content="String content to write to a new S3 file"
    s3.Object('my-bucket-name', 'newfile.txt').put(Body=content)
    

    回答 3

    这是一个从s3读取JSON的好技巧:

    import json, boto3
    s3 = boto3.resource("s3").Bucket("bucket")
    json.load_s3 = lambda f: json.load(s3.Object(key=f).get()["Body"])
    json.dump_s3 = lambda obj, f: s3.Object(key=f).put(Body=json.dumps(obj))

    现在你可以像使用load和dump一样,以相同的API使用json.load_s3和json.dump_s3:

    data = {"test":0}
    json.dump_s3(data, "key") # saves json to s3://bucket/key
    data = json.load_s3("key") # read json from s3://bucket/key

    Here’s a nice trick to read JSON from s3:

    import json, boto3
    s3 = boto3.resource("s3").Bucket("bucket")
    json.load_s3 = lambda f: json.load(s3.Object(key=f).get()["Body"])
    json.dump_s3 = lambda obj, f: s3.Object(key=f).put(Body=json.dumps(obj))
    

    Now you can use json.load_s3 and json.dump_s3 with the same API as load and dump

    data = {"test":0}
    json.dump_s3(data, "key") # saves json to s3://bucket/key
    data = json.load_s3("key") # read json from s3://bucket/key
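    The two lambdas only rely on the object exposing `get()['Body']` and `put(Body=...)`, so the round trip can be exercised with an in-memory stand-in for the bucket (a test double written for this sketch, not boto3's real API):

```python
import json
from io import BytesIO

class FakeObject:
    """In-memory stand-in mimicking the tiny slice of the S3 Object API used above."""
    _store = {}

    def __init__(self, key):
        self.key = key

    def put(self, Body):
        # S3 accepts str or bytes for Body; normalize to bytes like S3 stores them
        FakeObject._store[self.key] = Body.encode() if isinstance(Body, str) else Body

    def get(self):
        return {"Body": BytesIO(FakeObject._store[self.key])}

class FakeBucket:
    def Object(self, key):
        return FakeObject(key)

s3 = FakeBucket()
load_s3 = lambda f: json.load(s3.Object(f).get()["Body"])
dump_s3 = lambda obj, f: s3.Object(f).put(Body=json.dumps(obj))

dump_s3({"test": 0}, "key")
data = load_s3("key")
```

    Swapping `FakeBucket()` for `boto3.resource("s3").Bucket("bucket")` should give the behavior described in the answer, since the lambdas themselves are unchanged.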
    

    回答 4

    一个简洁明了的版本,我用它将文件动态上传到给定的S3存储桶和子文件夹:

    import boto3
    
    BUCKET_NAME = 'sample_bucket_name'
    PREFIX = 'sub-folder/'
    
    s3 = boto3.resource('s3')
    
    # Creating an empty file called "_DONE" and putting it in the S3 bucket
    s3.Object(BUCKET_NAME, PREFIX + '_DONE').put(Body="")

    注意:您应始终将AWS凭证(aws_access_key_id和aws_secret_access_key)放在单独的文件中,例如~/.aws/credentials。

    A cleaner and concise version which I use to upload files on the fly to a given S3 bucket and sub-folder-

    import boto3
    
    BUCKET_NAME = 'sample_bucket_name'
    PREFIX = 'sub-folder/'
    
    s3 = boto3.resource('s3')
    
    # Creating an empty file called "_DONE" and putting it in the S3 bucket
    s3.Object(BUCKET_NAME, PREFIX + '_DONE').put(Body="")
    

    Note: You should ALWAYS put your AWS credentials (aws_access_key_id and aws_secret_access_key) in a separate file, for example- ~/.aws/credentials


    回答 5

    值得一提的是以boto3作为后端的smart-open。

    smart-open是Python内置open的直接替代品,可以打开s3上的文件,也支持ftp、http和许多其他协议。

    例如

    from smart_open import open
    import json
    with open("s3://your_bucket/your_key.json", 'r') as f:
        data = json.load(f)

    aws凭证通过boto3的凭证机制加载,通常是~/.aws/目录中的文件或环境变量。

    it is worth mentioning smart-open that uses boto3 as a back-end.

    smart-open is a drop-in replacement for python’s open that can open files from s3, as well as ftp, http and many other protocols.

    for example

    from smart_open import open
    import json
    with open("s3://your_bucket/your_key.json", 'r') as f:
        data = json.load(f)
    

    The aws credentials are loaded via boto3 credentials, usually a file in the ~/.aws/ dir or an environment variable.


    回答 6

    您可以使用以下代码进行写入,例如在2019年将图像写入S3。要连接到S3,您必须先用pip install awscli命令安装AWS CLI,然后用aws configure命令输入凭证:

    import urllib3
    import uuid
    from pathlib import Path
    from io import BytesIO
    from errors import custom_exceptions as cex
    
    BUCKET_NAME = "xxx.yyy.zzz"
    POSTERS_BASE_PATH = "assets/wallcontent"
    CLOUDFRONT_BASE_URL = "https://xxx.cloudfront.net/"
    
    
    class S3(object):
        def __init__(self):
            self.client = boto3.client('s3')
            self.bucket_name = BUCKET_NAME
            self.posters_base_path = POSTERS_BASE_PATH
    
        def __download_image(self, url):
            manager = urllib3.PoolManager()
            try:
                res = manager.request('GET', url)
            except Exception:
                print("Could not download the image from URL: ", url)
                raise cex.ImageDownloadFailed
            return BytesIO(res.data)  # any file-like object that implements read()
    
        def upload_image(self, url):
            try:
                image_file = self.__download_image(url)
            except cex.ImageDownloadFailed:
                raise cex.ImageUploadFailed
    
            extension = Path(url).suffix
            id = uuid.uuid1().hex + extension
            final_path = self.posters_base_path + "/" + id
            try:
                self.client.upload_fileobj(image_file,
                                           self.bucket_name,
                                           final_path
                                           )
            except Exception:
                print("Image Upload Error for URL: ", url)
                raise cex.ImageUploadFailed
    
            return CLOUDFRONT_BASE_URL + id

    You may use the below code to write, for example an image to S3 in 2019. To be able to connect to S3 you will have to install AWS CLI using command pip install awscli, then enter few credentials using command aws configure:

    import urllib3
    import uuid
    from pathlib import Path
    from io import BytesIO
    from errors import custom_exceptions as cex
    
    BUCKET_NAME = "xxx.yyy.zzz"
    POSTERS_BASE_PATH = "assets/wallcontent"
    CLOUDFRONT_BASE_URL = "https://xxx.cloudfront.net/"
    
    
    class S3(object):
        def __init__(self):
            self.client = boto3.client('s3')
            self.bucket_name = BUCKET_NAME
            self.posters_base_path = POSTERS_BASE_PATH
    
        def __download_image(self, url):
            manager = urllib3.PoolManager()
            try:
                res = manager.request('GET', url)
            except Exception:
                print("Could not download the image from URL: ", url)
                raise cex.ImageDownloadFailed
            return BytesIO(res.data)  # any file-like object that implements read()
    
        def upload_image(self, url):
            try:
                image_file = self.__download_image(url)
            except cex.ImageDownloadFailed:
                raise cex.ImageUploadFailed
    
            extension = Path(url).suffix
            id = uuid.uuid1().hex + extension
            final_path = self.posters_base_path + "/" + id
            try:
                self.client.upload_fileobj(image_file,
                                           self.bucket_name,
                                           final_path
                                           )
            except Exception:
                print("Image Upload Error for URL: ", url)
                raise cex.ImageUploadFailed
    
            return CLOUDFRONT_BASE_URL + id
    

    AWS boto和boto3有什么区别[关闭]

    问题:AWS boto和boto3有什么区别[关闭]

    我是使用Python的AWS新手,并且正在尝试学习boto API,但是我注意到有两个主要的Python版本/软件包。那将是boto和boto3。

    AWS boto库和boto3库之间有什么区别?

    I’m new to AWS using Python and I’m trying to learn the boto API however I noticed that there are two major versions/packages for Python. That would be boto and boto3.

    What is the difference between the AWS boto and boto3 libraries?


    回答 0

    boto包是自2006年以来一直存在的手工编写的Python库。它非常流行,并得到AWS的完全支持,但由于它是手工编码的,而且可用的服务非常多(而且还在不断增加),因此很难维护。

    因此,boto3是基于botocore的boto库的新版本。AWS的所有低级接口均由JSON服务描述驱动,而这些描述是根据服务的规范描述自动生成的。因此,接口始终正确且始终是最新的。客户端层之上还有一个资源层,它提供了更好、更具Python风格的接口。

    AWS正在积极开发boto3库,如果您要开始新的开发,我建议使用它。

    The boto package is the hand-coded Python library that has been around since 2006. It is very popular and is fully supported by AWS but because it is hand-coded and there are so many services available (with more appearing all the time) it is difficult to maintain.

    So, boto3 is a new version of the boto library based on botocore. All of the low-level interfaces to AWS are driven from JSON service descriptions that are generated automatically from the canonical descriptions of the services. So, the interfaces are always correct and always up to date. There is a resource layer on top of the client-layer that provides a nicer, more Pythonic interface.

    The boto3 library is being actively developed by AWS and is the one I would recommend people use if they are starting new development.


    如何使用boto3处理错误?

    问题:如何使用boto3处理错误?

    我试图弄清楚如何使用boto3进行正确的错误处理。

    我正在尝试创建一个IAM用户:

    def create_user(username, iam_conn):
        try:
            user = iam_conn.create_user(UserName=username)
            return user
        except Exception as e:
            return e

    成功调用create_user时,我得到一个整洁的对象,其中包含API调用的http状态代码和新创建的用户的数据。

    例:

    {'ResponseMetadata': 
          {'HTTPStatusCode': 200, 
           'RequestId': 'omitted'
          },
     u'User': {u'Arn': 'arn:aws:iam::omitted:user/omitted',
               u'CreateDate': datetime.datetime(2015, 10, 11, 17, 13, 5, 882000, tzinfo=tzutc()),
               u'Path': '/',
               u'UserId': 'omitted',
               u'UserName': 'omitted'
              }
    }

    这很好。但是,如果失败(例如用户已经存在),我只会得到一个类型为botocore.exceptions.ClientError的对象,其中只有文本可以告诉我出了什么问题。

    示例:ClientError(’调用CreateUser操作时发生错误(EntityAlreadyExists):名称省略的用户已经存在。’,)

    这(据我所知)使得错误处理非常困难,因为我不能直接根据返回的http状态代码进行分支处理(根据IAM的AWS API文档,用户已存在对应409)。这让我觉得我一定是哪里做错了。最理想的方式是让boto3永远不抛出异常,而是始终返回一个反映API调用结果的对象。

    谁能在这个问题上启发我或为我指明正确的方向?

    I am trying to figure how to do proper error handling with boto3.

    I am trying to create an IAM user:

    def create_user(username, iam_conn):
        try:
            user = iam_conn.create_user(UserName=username)
            return user
        except Exception as e:
            return e
    

    When the call to create_user succeeds, I get a neat object that contains the http status code of the API call and the data of the newly created user.

    Example:

    {'ResponseMetadata': 
          {'HTTPStatusCode': 200, 
           'RequestId': 'omitted'
          },
     u'User': {u'Arn': 'arn:aws:iam::omitted:user/omitted',
               u'CreateDate': datetime.datetime(2015, 10, 11, 17, 13, 5, 882000, tzinfo=tzutc()),
               u'Path': '/',
               u'UserId': 'omitted',
               u'UserName': 'omitted'
              }
    }
    

    This works great. But when this fails (like if the user already exists), I just get an object of type botocore.exceptions.ClientError with only text to tell me what went wrong.

    Example: ClientError(‘An error occurred (EntityAlreadyExists) when calling the CreateUser operation: User with name omitted already exists.’,)

    This (AFAIK) makes error handling very hard because I can’t just switch on the resulting http status code (409 for user already exists according to the AWS API docs for IAM). This makes me think that I must be doing something the wrong way. The optimal way would be for boto3 to never throw exceptions, but just always return an object that reflects how the API call went.

    Can anyone enlighten me on this issue or point me in the right direction?


    回答 0

    使用异常中包含的响应。这是一个例子:

    import boto3
    from botocore.exceptions import ClientError
    
    try:
        iam = boto3.client('iam')
        user = iam.create_user(UserName='fred')
        print("Created user: %s" % user)
    except ClientError as e:
        if e.response['Error']['Code'] == 'EntityAlreadyExists':
            print("User already exists")
        else:
            print("Unexpected error: %s" % e)

    异常中的响应字典将包含以下内容:

    • ['Error']['Code'] 例如’EntityAlreadyExists’或’ValidationError’
    • ['ResponseMetadata']['HTTPStatusCode'] 例如400
    • ['ResponseMetadata']['RequestId'] 例如 'd2b06652-88d7-11e5-99d0-812348583a35'
    • ['Error']['Message'] 例如:“发生错误(EntityAlreadyExists)…”
    • ['Error']['Type'] 例如“发件人”

    有关更多信息,请参见botocore错误处理
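
    上面的字段结构可以用一个纯 Python 小例子来示意如何据此分支处理错误(下面的 response 字典是手工构造的假设数据,并非真实的 AWS 调用结果):

```python
# 手工构造的 e.response 结构(假设数据,仅用于演示分支逻辑)
response = {
    'Error': {'Code': 'EntityAlreadyExists',
              'Message': 'An error occurred (EntityAlreadyExists) ...'},
    'ResponseMetadata': {'HTTPStatusCode': 409, 'RequestId': 'omitted'},
}

def classify(resp):
    # 按错误码(而不是异常文本)区分错误类型
    if resp['Error']['Code'] == 'EntityAlreadyExists':
        return 'User already exists'
    return 'Unexpected error (HTTP %d)' % resp['ResponseMetadata']['HTTPStatusCode']

print(classify(response))  # User already exists
```

    这正是上面 except 分支里 e.response['Error']['Code'] 判断所做的事情。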

    [更新时间:2018-03-07]

    AWS Python SDK 已开始在客户端(client)上公开可显式捕获的服务异常(resource 上暂不支持),因此现在可以编写如下代码:

    import boto3
    from botocore.exceptions import ClientError, ParamValidationError
    
    try:
        iam = boto3.client('iam')
        user = iam.create_user(UserName='fred')
        print("Created user: %s" % user)
    except iam.exceptions.EntityAlreadyExistsException:
        print("User already exists")
    except ParamValidationError as e:
        print("Parameter validation error: %s" % e)
    except ClientError as e:
        print("Unexpected error: %s" % e)

    不幸的是,目前没有关于这些异常的文档。

    Use the response contained within the exception. Here is an example:

    import boto3
    from botocore.exceptions import ClientError
    
    try:
        iam = boto3.client('iam')
        user = iam.create_user(UserName='fred')
        print("Created user: %s" % user)
    except ClientError as e:
        if e.response['Error']['Code'] == 'EntityAlreadyExists':
            print("User already exists")
        else:
            print("Unexpected error: %s" % e)
    

    The response dict in the exception will contain the following:

    • ['Error']['Code'] e.g. ‘EntityAlreadyExists’ or ‘ValidationError’
    • ['ResponseMetadata']['HTTPStatusCode'] e.g. 400
    • ['ResponseMetadata']['RequestId'] e.g. 'd2b06652-88d7-11e5-99d0-812348583a35'
    • ['Error']['Message'] e.g. “An error occurred (EntityAlreadyExists) …”
    • ['Error']['Type'] e.g. ‘Sender’

    For more information, see the botocore error handling documentation.

    [Updated: 2018-03-07]

    The AWS Python SDK has begun to expose service exceptions on clients (though not on resources) that you can explicitly catch, so it is now possible to write that code like this:

    import botocore
    import boto3
    
    try:
        iam = boto3.client('iam')
        user = iam.create_user(UserName='fred')
        print("Created user: %s" % user)
    except iam.exceptions.EntityAlreadyExistsException:
        print("User already exists")
    except botocore.exceptions.ParamValidationError as e:
        print("Parameter validation error: %s" % e)
    except botocore.exceptions.ClientError as e:
        print("Unexpected error: %s" % e)
    

    Unfortunately, there is currently no documentation for these exceptions but you can get a list of them as follows:

    import botocore
    import boto3
    dir(botocore.exceptions)
    

    Note that you must import both botocore and boto3. If you only import botocore then you will find that botocore has no attribute named exceptions. This is because the exceptions are dynamically populated into botocore by boto3.
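
    The dynamic population can be sketched in plain Python. The `ServiceExceptions` class below is a simplified illustrative model, not the actual botocore implementation: one exception class is built per error-code string at runtime and attached to a container, which is also why the `_code_to_exception` mapping used in a later answer exists on clients.

```python
# Simplified model of how per-service exception classes could be
# generated from error-code strings at runtime (illustration only,
# not the real botocore code).
class ServiceExceptions:
    def __init__(self, error_codes):
        self._code_to_exception = {}
        for code in error_codes:
            exc = type(code + 'Exception', (Exception,), {})
            self._code_to_exception[code] = exc
            setattr(self, code + 'Exception', exc)

exceptions = ServiceExceptions(['EntityAlreadyExists', 'NoSuchEntity'])

try:
    raise exceptions.EntityAlreadyExistsException('User with name fred already exists')
except exceptions.EntityAlreadyExistsException as e:
    print('User already exists:', e)
```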


    回答 1

    由于这些 exceptions 没有文档,我发现把这个包的所有 exceptions 打印到屏幕上非常有用。下面是我用的代码:

    import botocore.exceptions
    def listexns(mod):
        exns = []
        for name, obj in botocore.exceptions.__dict__.items():
            # 异常类是 Exception 的子类;以 'Error' 结尾的动态别名也一并收集
            if ((isinstance(obj, type) and issubclass(obj, Exception))
                    or name.endswith('Error')):
                exns.append(name)
        for name in exns:
            print('%s.%s is an exception type' % (str(mod), name))
    
    if __name__ == '__main__':
        import sys
        if len(sys.argv) <= 1:
            print('Give me a module name on the $PYTHONPATH!')
            sys.exit(1)
        print('Looking for exception types in module: %s' % sys.argv[1])
        listexns(sys.argv[1])

    结果是:

    Looking for exception types in module: boto3
    boto3.BotoCoreError is an exception type
    boto3.DataNotFoundError is an exception type
    boto3.UnknownServiceError is an exception type
    boto3.ApiVersionNotFoundError is an exception type
    boto3.HTTPClientError is an exception type
    boto3.ConnectionError is an exception type
    boto3.EndpointConnectionError is an exception type
    boto3.SSLError is an exception type
    boto3.ConnectionClosedError is an exception type
    boto3.ReadTimeoutError is an exception type
    boto3.ConnectTimeoutError is an exception type
    boto3.ProxyConnectionError is an exception type
    boto3.NoCredentialsError is an exception type
    boto3.PartialCredentialsError is an exception type
    boto3.CredentialRetrievalError is an exception type
    boto3.UnknownSignatureVersionError is an exception type
    boto3.ServiceNotInRegionError is an exception type
    boto3.BaseEndpointResolverError is an exception type
    boto3.NoRegionError is an exception type
    boto3.UnknownEndpointError is an exception type
    boto3.ConfigParseError is an exception type
    boto3.MissingParametersError is an exception type
    boto3.ValidationError is an exception type
    boto3.ParamValidationError is an exception type
    boto3.UnknownKeyError is an exception type
    boto3.RangeError is an exception type
    boto3.UnknownParameterError is an exception type
    boto3.AliasConflictParameterError is an exception type
    boto3.PaginationError is an exception type
    boto3.OperationNotPageableError is an exception type
    boto3.ChecksumError is an exception type
    boto3.UnseekableStreamError is an exception type
    boto3.WaiterError is an exception type
    boto3.IncompleteReadError is an exception type
    boto3.InvalidExpressionError is an exception type
    boto3.UnknownCredentialError is an exception type
    boto3.WaiterConfigError is an exception type
    boto3.UnknownClientMethodError is an exception type
    boto3.UnsupportedSignatureVersionError is an exception type
    boto3.ClientError is an exception type
    boto3.EventStreamError is an exception type
    boto3.InvalidDNSNameError is an exception type
    boto3.InvalidS3AddressingStyleError is an exception type
    boto3.InvalidRetryConfigurationError is an exception type
    boto3.InvalidMaxRetryAttemptsError is an exception type
    boto3.StubResponseError is an exception type
    boto3.StubAssertionError is an exception type
    boto3.UnStubbedResponseError is an exception type
    boto3.InvalidConfigError is an exception type
    boto3.InfiniteLoopConfigError is an exception type
    boto3.RefreshWithMFAUnsupportedError is an exception type
    boto3.MD5UnavailableError is an exception type
    boto3.MetadataRetrievalError is an exception type
    boto3.UndefinedModelAttributeError is an exception type
    boto3.MissingServiceIdError is an exception type

    I found it very useful, since the Exceptions are not documented, to list all exceptions to the screen for this package. Here is the code I used to do it:

    import botocore.exceptions
    def listexns(mod):
        exns = []
        for name, obj in botocore.exceptions.__dict__.items():
            # exception classes subclass Exception; dynamically generated
            # aliases ending in 'Error' are collected as well
            if ((isinstance(obj, type) and issubclass(obj, Exception))
                    or name.endswith('Error')):
                exns.append(name)
        for name in exns:
            print('%s.%s is an exception type' % (str(mod), name))
    
    if __name__ == '__main__':
        import sys
        if len(sys.argv) <= 1:
            print('Give me a module name on the $PYTHONPATH!')
            sys.exit(1)
        print('Looking for exception types in module: %s' % sys.argv[1])
        listexns(sys.argv[1])
    

    Which results in:

    Looking for exception types in module: boto3
    boto3.BotoCoreError is an exception type
    boto3.DataNotFoundError is an exception type
    boto3.UnknownServiceError is an exception type
    boto3.ApiVersionNotFoundError is an exception type
    boto3.HTTPClientError is an exception type
    boto3.ConnectionError is an exception type
    boto3.EndpointConnectionError is an exception type
    boto3.SSLError is an exception type
    boto3.ConnectionClosedError is an exception type
    boto3.ReadTimeoutError is an exception type
    boto3.ConnectTimeoutError is an exception type
    boto3.ProxyConnectionError is an exception type
    boto3.NoCredentialsError is an exception type
    boto3.PartialCredentialsError is an exception type
    boto3.CredentialRetrievalError is an exception type
    boto3.UnknownSignatureVersionError is an exception type
    boto3.ServiceNotInRegionError is an exception type
    boto3.BaseEndpointResolverError is an exception type
    boto3.NoRegionError is an exception type
    boto3.UnknownEndpointError is an exception type
    boto3.ConfigParseError is an exception type
    boto3.MissingParametersError is an exception type
    boto3.ValidationError is an exception type
    boto3.ParamValidationError is an exception type
    boto3.UnknownKeyError is an exception type
    boto3.RangeError is an exception type
    boto3.UnknownParameterError is an exception type
    boto3.AliasConflictParameterError is an exception type
    boto3.PaginationError is an exception type
    boto3.OperationNotPageableError is an exception type
    boto3.ChecksumError is an exception type
    boto3.UnseekableStreamError is an exception type
    boto3.WaiterError is an exception type
    boto3.IncompleteReadError is an exception type
    boto3.InvalidExpressionError is an exception type
    boto3.UnknownCredentialError is an exception type
    boto3.WaiterConfigError is an exception type
    boto3.UnknownClientMethodError is an exception type
    boto3.UnsupportedSignatureVersionError is an exception type
    boto3.ClientError is an exception type
    boto3.EventStreamError is an exception type
    boto3.InvalidDNSNameError is an exception type
    boto3.InvalidS3AddressingStyleError is an exception type
    boto3.InvalidRetryConfigurationError is an exception type
    boto3.InvalidMaxRetryAttemptsError is an exception type
    boto3.StubResponseError is an exception type
    boto3.StubAssertionError is an exception type
    boto3.UnStubbedResponseError is an exception type
    boto3.InvalidConfigError is an exception type
    boto3.InfiniteLoopConfigError is an exception type
    boto3.RefreshWithMFAUnsupportedError is an exception type
    boto3.MD5UnavailableError is an exception type
    boto3.MetadataRetrievalError is an exception type
    boto3.UndefinedModelAttributeError is an exception type
    boto3.MissingServiceIdError is an exception type
    

    回答 2

    只是对@jarmod所指出的“资源上无exceptions”问题的更新(如果以下内容适用,请随时更新您的答案)

    我测试了下面的代码,运行正常。它用 resource 来执行操作,但捕获的是 client.exceptions 中的异常——虽然这样写看起来有点别扭,但实际测试没有问题:在异常发生时用调试器查看,异常类都能正确出现并匹配……

    它可能不适用于所有资源和客户端,但适用于数据文件夹(也称为s3存储桶)。

    lab_session = boto3.Session()
    s3 = lab_session.resource('s3') # 用 resource 执行实际操作
    c = lab_session.client('s3')    # 这个 client 仅用于捕获异常
    
    try:
        b = s3.Bucket(bucket)
        b.delete()
    except c.exceptions.NoSuchBucket as e:
        # 忽略 bucket 不存在的异常
        logger.debug("Failed deleting bucket. Continuing. {}".format(e))
    except Exception as e:
        # 其余异常记录为 warning
        logger.warning("Failed deleting bucket. Continuing. {}".format(e))

    希望这可以帮助…

    Just an update to the ‘no exceptions on resources’ problem as pointed to by @jarmod (do please feel free to update your answer if below seems applicable)

    I have tested the below code and it runs fine. It uses ‘resources’ for doing things, but catches the client.exceptions – although it ‘looks’ somewhat wrong… it tests good, the exception classes are showing and matching when looked into using debugger at exception time…

    It may not be applicable to all resources and clients, but works for data folders (aka s3 buckets).

    lab_session = boto3.Session()
    s3 = lab_session.resource('s3') # resource used for the actual operations
    c = lab_session.client('s3')    # this client is only for exception catching
    
    try:
        b = s3.Bucket(bucket)
        b.delete()
    except c.exceptions.NoSuchBucket as e:
        # ignoring no such bucket exceptions
        logger.debug("Failed deleting bucket. Continuing. {}".format(e))
    except Exception as e:
        # logging all the others as warning
        logger.warning("Failed deleting bucket. Continuing. {}".format(e))
    

    Hope this helps…


    回答 3

    如前所述,您可以通过服务客户端(service_client.exceptions.<ExceptionClass>)或资源(service_resource.meta.client.exceptions.<ExceptionClass>)捕获某些错误,但这方面的文档很少(哪些异常属于哪个客户端也没有记录)。因此,截至撰写本文时(2020年1月),下面的代码可以得到 EU(爱尔兰)(eu-west-1)区域的完整映射:

    import boto3, pprint
    
    region_name = 'eu-west-1'
    session = boto3.Session(region_name=region_name)
    exceptions = {
      service: list(session.client(service).exceptions._code_to_exception)
      for service in session.get_available_services()
    }
    pprint.pprint(exceptions, width=20000)

    下面是输出(相当长)的一个子集:

    {'acm': ['InvalidArnException', 'InvalidDomainValidationOptionsException', 'InvalidStateException', 'InvalidTagException', 'LimitExceededException', 'RequestInProgressException', 'ResourceInUseException', 'ResourceNotFoundException', 'TooManyTagsException'],
     'apigateway': ['BadRequestException', 'ConflictException', 'LimitExceededException', 'NotFoundException', 'ServiceUnavailableException', 'TooManyRequestsException', 'UnauthorizedException'],
     'athena': ['InternalServerException', 'InvalidRequestException', 'TooManyRequestsException'],
     'autoscaling': ['AlreadyExists', 'InvalidNextToken', 'LimitExceeded', 'ResourceContention', 'ResourceInUse', 'ScalingActivityInProgress', 'ServiceLinkedRoleFailure'],
     'cloudformation': ['AlreadyExistsException', 'ChangeSetNotFound', 'CreatedButModifiedException', 'InsufficientCapabilitiesException', 'InvalidChangeSetStatus', 'InvalidOperationException', 'LimitExceededException', 'NameAlreadyExistsException', 'OperationIdAlreadyExistsException', 'OperationInProgressException', 'OperationNotFoundException', 'StackInstanceNotFoundException', 'StackSetNotEmptyException', 'StackSetNotFoundException', 'StaleRequestException', 'TokenAlreadyExistsException'],
     'cloudfront': ['AccessDenied', 'BatchTooLarge', 'CNAMEAlreadyExists', 'CannotChangeImmutablePublicKeyFields', 'CloudFrontOriginAccessIdentityAlreadyExists', 'CloudFrontOriginAccessIdentityInUse', 'DistributionAlreadyExists', 'DistributionNotDisabled', 'FieldLevelEncryptionConfigAlreadyExists', 'FieldLevelEncryptionConfigInUse', 'FieldLevelEncryptionProfileAlreadyExists', 'FieldLevelEncryptionProfileInUse', 'FieldLevelEncryptionProfileSizeExceeded', 'IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior', 'IllegalUpdate', 'InconsistentQuantities', 'InvalidArgument', 'InvalidDefaultRootObject', 'InvalidErrorCode', 'InvalidForwardCookies', 'InvalidGeoRestrictionParameter', 'InvalidHeadersForS3Origin', 'InvalidIfMatchVersion', 'InvalidLambdaFunctionAssociation', 'InvalidLocationCode', 'InvalidMinimumProtocolVersion', 'InvalidOrigin', 'InvalidOriginAccessIdentity', 'InvalidOriginKeepaliveTimeout', 'InvalidOriginReadTimeout', 'InvalidProtocolSettings', 'InvalidQueryStringParameters', 'InvalidRelativePath', 'InvalidRequiredProtocol', 'InvalidResponseCode', 'InvalidTTLOrder', 'InvalidTagging', 'InvalidViewerCertificate', 'InvalidWebACLId', 'MissingBody', 'NoSuchCloudFrontOriginAccessIdentity', 'NoSuchDistribution', 'NoSuchFieldLevelEncryptionConfig', 'NoSuchFieldLevelEncryptionProfile', 'NoSuchInvalidation', 'NoSuchOrigin', 'NoSuchPublicKey', 'NoSuchResource', 'NoSuchStreamingDistribution', 'PreconditionFailed', 'PublicKeyAlreadyExists', 'PublicKeyInUse', 'QueryArgProfileEmpty', 'StreamingDistributionAlreadyExists', 'StreamingDistributionNotDisabled', 'TooManyCacheBehaviors', 'TooManyCertificates', 'TooManyCloudFrontOriginAccessIdentities', 'TooManyCookieNamesInWhiteList', 'TooManyDistributionCNAMEs', 'TooManyDistributions', 'TooManyDistributionsAssociatedToFieldLevelEncryptionConfig', 'TooManyDistributionsWithLambdaAssociations', 'TooManyFieldLevelEncryptionConfigs', 'TooManyFieldLevelEncryptionContentTypeProfiles', 
'TooManyFieldLevelEncryptionEncryptionEntities', 'TooManyFieldLevelEncryptionFieldPatterns', 'TooManyFieldLevelEncryptionProfiles', 'TooManyFieldLevelEncryptionQueryArgProfiles', 'TooManyHeadersInForwardedValues', 'TooManyInvalidationsInProgress', 'TooManyLambdaFunctionAssociations', 'TooManyOriginCustomHeaders', 'TooManyOriginGroupsPerDistribution', 'TooManyOrigins', 'TooManyPublicKeys', 'TooManyQueryStringParameters', 'TooManyStreamingDistributionCNAMEs', 'TooManyStreamingDistributions', 'TooManyTrustedSigners', 'TrustedSignerDoesNotExist'],
     'cloudtrail': ['CloudTrailARNInvalidException', 'CloudTrailAccessNotEnabledException', 'CloudWatchLogsDeliveryUnavailableException', 'InsufficientDependencyServiceAccessPermissionException', 'InsufficientEncryptionPolicyException', 'InsufficientS3BucketPolicyException', 'InsufficientSnsTopicPolicyException', 'InvalidCloudWatchLogsLogGroupArnException', 'InvalidCloudWatchLogsRoleArnException', 'InvalidEventSelectorsException', 'InvalidHomeRegionException', 'InvalidKmsKeyIdException', 'InvalidLookupAttributesException', 'InvalidMaxResultsException', 'InvalidNextTokenException', 'InvalidParameterCombinationException', 'InvalidS3BucketNameException', 'InvalidS3PrefixException', 'InvalidSnsTopicNameException', 'InvalidTagParameterException', 'InvalidTimeRangeException', 'InvalidTokenException', 'InvalidTrailNameException', 'KmsException', 'KmsKeyDisabledException', 'KmsKeyNotFoundException', 'MaximumNumberOfTrailsExceededException', 'NotOrganizationMasterAccountException', 'OperationNotPermittedException', 'OrganizationNotInAllFeaturesModeException', 'OrganizationsNotInUseException', 'ResourceNotFoundException', 'ResourceTypeNotSupportedException', 'S3BucketDoesNotExistException', 'TagsLimitExceededException', 'TrailAlreadyExistsException', 'TrailNotFoundException', 'TrailNotProvidedException', 'UnsupportedOperationException'],
     'cloudwatch': ['InvalidParameterInput', 'ResourceNotFound', 'InternalServiceError', 'InvalidFormat', 'InvalidNextToken', 'InvalidParameterCombination', 'InvalidParameterValue', 'LimitExceeded', 'MissingParameter'],
     'codebuild': ['AccountLimitExceededException', 'InvalidInputException', 'OAuthProviderException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException'],
     'config': ['InsufficientDeliveryPolicyException', 'InsufficientPermissionsException', 'InvalidConfigurationRecorderNameException', 'InvalidDeliveryChannelNameException', 'InvalidLimitException', 'InvalidNextTokenException', 'InvalidParameterValueException', 'InvalidRecordingGroupException', 'InvalidResultTokenException', 'InvalidRoleException', 'InvalidS3KeyPrefixException', 'InvalidSNSTopicARNException', 'InvalidTimeRangeException', 'LastDeliveryChannelDeleteFailedException', 'LimitExceededException', 'MaxNumberOfConfigRulesExceededException', 'MaxNumberOfConfigurationRecordersExceededException', 'MaxNumberOfDeliveryChannelsExceededException', 'MaxNumberOfRetentionConfigurationsExceededException', 'NoAvailableConfigurationRecorderException', 'NoAvailableDeliveryChannelException', 'NoAvailableOrganizationException', 'NoRunningConfigurationRecorderException', 'NoSuchBucketException', 'NoSuchConfigRuleException', 'NoSuchConfigurationAggregatorException', 'NoSuchConfigurationRecorderException', 'NoSuchDeliveryChannelException', 'NoSuchRetentionConfigurationException', 'OrganizationAccessDeniedException', 'OrganizationAllFeaturesNotEnabledException', 'OversizedConfigurationItemException', 'ResourceInUseException', 'ResourceNotDiscoveredException', 'ValidationException'],
     'dynamodb': ['BackupInUseException', 'BackupNotFoundException', 'ConditionalCheckFailedException', 'ContinuousBackupsUnavailableException', 'GlobalTableAlreadyExistsException', 'GlobalTableNotFoundException', 'IdempotentParameterMismatchException', 'IndexNotFoundException', 'InternalServerError', 'InvalidRestoreTimeException', 'ItemCollectionSizeLimitExceededException', 'LimitExceededException', 'PointInTimeRecoveryUnavailableException', 'ProvisionedThroughputExceededException', 'ReplicaAlreadyExistsException', 'ReplicaNotFoundException', 'RequestLimitExceeded', 'ResourceInUseException', 'ResourceNotFoundException', 'TableAlreadyExistsException', 'TableInUseException', 'TableNotFoundException', 'TransactionCanceledException', 'TransactionConflictException', 'TransactionInProgressException'],
     'ec2': [],
     'ecr': ['EmptyUploadException', 'ImageAlreadyExistsException', 'ImageNotFoundException', 'InvalidLayerException', 'InvalidLayerPartException', 'InvalidParameterException', 'InvalidTagParameterException', 'LayerAlreadyExistsException', 'LayerInaccessibleException', 'LayerPartTooSmallException', 'LayersNotFoundException', 'LifecyclePolicyNotFoundException', 'LifecyclePolicyPreviewInProgressException', 'LifecyclePolicyPreviewNotFoundException', 'LimitExceededException', 'RepositoryAlreadyExistsException', 'RepositoryNotEmptyException', 'RepositoryNotFoundException', 'RepositoryPolicyNotFoundException', 'ServerException', 'TooManyTagsException', 'UploadNotFoundException'],
     'ecs': ['AccessDeniedException', 'AttributeLimitExceededException', 'BlockedException', 'ClientException', 'ClusterContainsContainerInstancesException', 'ClusterContainsServicesException', 'ClusterContainsTasksException', 'ClusterNotFoundException', 'InvalidParameterException', 'MissingVersionException', 'NoUpdateAvailableException', 'PlatformTaskDefinitionIncompatibilityException', 'PlatformUnknownException', 'ResourceNotFoundException', 'ServerException', 'ServiceNotActiveException', 'ServiceNotFoundException', 'TargetNotFoundException', 'UnsupportedFeatureException', 'UpdateInProgressException'],
     'efs': ['BadRequest', 'DependencyTimeout', 'FileSystemAlreadyExists', 'FileSystemInUse', 'FileSystemLimitExceeded', 'FileSystemNotFound', 'IncorrectFileSystemLifeCycleState', 'IncorrectMountTargetState', 'InsufficientThroughputCapacity', 'InternalServerError', 'IpAddressInUse', 'MountTargetConflict', 'MountTargetNotFound', 'NetworkInterfaceLimitExceeded', 'NoFreeAddressesInSubnet', 'SecurityGroupLimitExceeded', 'SecurityGroupNotFound', 'SubnetNotFound', 'ThroughputLimitExceeded', 'TooManyRequests', 'UnsupportedAvailabilityZone'],
     'eks': ['ClientException', 'InvalidParameterException', 'InvalidRequestException', 'ResourceInUseException', 'ResourceLimitExceededException', 'ResourceNotFoundException', 'ServerException', 'ServiceUnavailableException', 'UnsupportedAvailabilityZoneException'],
     'elasticache': ['APICallRateForCustomerExceeded', 'AuthorizationAlreadyExists', 'AuthorizationNotFound', 'CacheClusterAlreadyExists', 'CacheClusterNotFound', 'CacheParameterGroupAlreadyExists', 'CacheParameterGroupNotFound', 'CacheParameterGroupQuotaExceeded', 'CacheSecurityGroupAlreadyExists', 'CacheSecurityGroupNotFound', 'QuotaExceeded.CacheSecurityGroup', 'CacheSubnetGroupAlreadyExists', 'CacheSubnetGroupInUse', 'CacheSubnetGroupNotFoundFault', 'CacheSubnetGroupQuotaExceeded', 'CacheSubnetQuotaExceededFault', 'ClusterQuotaForCustomerExceeded', 'InsufficientCacheClusterCapacity', 'InvalidARN', 'InvalidCacheClusterState', 'InvalidCacheParameterGroupState', 'InvalidCacheSecurityGroupState', 'InvalidParameterCombination', 'InvalidParameterValue', 'InvalidReplicationGroupState', 'InvalidSnapshotState', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'NoOperationFault', 'NodeGroupNotFoundFault', 'NodeGroupsPerReplicationGroupQuotaExceeded', 'NodeQuotaForClusterExceeded', 'NodeQuotaForCustomerExceeded', 'ReplicationGroupAlreadyExists', 'ReplicationGroupNotFoundFault', 'ReservedCacheNodeAlreadyExists', 'ReservedCacheNodeNotFound', 'ReservedCacheNodeQuotaExceeded', 'ReservedCacheNodesOfferingNotFound', 'ServiceLinkedRoleNotFoundFault', 'SnapshotAlreadyExistsFault', 'SnapshotFeatureNotSupportedFault', 'SnapshotNotFoundFault', 'SnapshotQuotaExceededFault', 'SubnetInUse', 'TagNotFound', 'TagQuotaPerResourceExceeded', 'TestFailoverNotAvailableFault'],
     'elasticbeanstalk': ['CodeBuildNotInServiceRegionException', 'ElasticBeanstalkServiceException', 'InsufficientPrivilegesException', 'InvalidRequestException', 'ManagedActionInvalidStateException', 'OperationInProgressFailure', 'PlatformVersionStillReferencedException', 'ResourceNotFoundException', 'ResourceTypeNotSupportedException', 'S3LocationNotInServiceRegionException', 'S3SubscriptionRequiredException', 'SourceBundleDeletionFailure', 'TooManyApplicationVersionsException', 'TooManyApplicationsException', 'TooManyBucketsException', 'TooManyConfigurationTemplatesException', 'TooManyEnvironmentsException', 'TooManyPlatformsException', 'TooManyTagsException'],
     'elb': ['LoadBalancerNotFound', 'CertificateNotFound', 'DependencyThrottle', 'DuplicateLoadBalancerName', 'DuplicateListener', 'DuplicatePolicyName', 'DuplicateTagKeys', 'InvalidConfigurationRequest', 'InvalidInstance', 'InvalidScheme', 'InvalidSecurityGroup', 'InvalidSubnet', 'ListenerNotFound', 'LoadBalancerAttributeNotFound', 'OperationNotPermitted', 'PolicyNotFound', 'PolicyTypeNotFound', 'SubnetNotFound', 'TooManyLoadBalancers', 'TooManyPolicies', 'TooManyTags', 'UnsupportedProtocol'],
     'emr': ['InternalServerError', 'InternalServerException', 'InvalidRequestException'],
     'es': ['BaseException', 'DisabledOperationException', 'InternalException', 'InvalidTypeException', 'LimitExceededException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException', 'ValidationException'],
     'events': ['ConcurrentModificationException', 'InternalException', 'InvalidEventPatternException', 'LimitExceededException', 'ManagedRuleException', 'PolicyLengthExceededException', 'ResourceNotFoundException'],
     'firehose': ['ConcurrentModificationException', 'InvalidArgumentException', 'LimitExceededException', 'ResourceInUseException', 'ResourceNotFoundException', 'ServiceUnavailableException'],
     'glacier': ['InsufficientCapacityException', 'InvalidParameterValueException', 'LimitExceededException', 'MissingParameterValueException', 'PolicyEnforcedException', 'RequestTimeoutException', 'ResourceNotFoundException', 'ServiceUnavailableException'],
     'glue': ['AccessDeniedException', 'AlreadyExistsException', 'ConcurrentModificationException', 'ConcurrentRunsExceededException', 'ConditionCheckFailureException', 'CrawlerNotRunningException', 'CrawlerRunningException', 'CrawlerStoppingException', 'EntityNotFoundException', 'GlueEncryptionException', 'IdempotentParameterMismatchException', 'InternalServiceException', 'InvalidInputException', 'NoScheduleException', 'OperationTimeoutException', 'ResourceNumberLimitExceededException', 'SchedulerNotRunningException', 'SchedulerRunningException', 'SchedulerTransitioningException', 'ValidationException', 'VersionMismatchException'],
     'iam': ['ConcurrentModification', 'ReportExpired', 'ReportNotPresent', 'ReportInProgress', 'DeleteConflict', 'DuplicateCertificate', 'DuplicateSSHPublicKey', 'EntityAlreadyExists', 'EntityTemporarilyUnmodifiable', 'InvalidAuthenticationCode', 'InvalidCertificate', 'InvalidInput', 'InvalidPublicKey', 'InvalidUserType', 'KeyPairMismatch', 'LimitExceeded', 'MalformedCertificate', 'MalformedPolicyDocument', 'NoSuchEntity', 'PasswordPolicyViolation', 'PolicyEvaluation', 'PolicyNotAttachable', 'ServiceFailure', 'NotSupportedService', 'UnmodifiableEntity', 'UnrecognizedPublicKeyEncoding'],
     'kinesis': ['ExpiredIteratorException', 'ExpiredNextTokenException', 'InternalFailureException', 'InvalidArgumentException', 'KMSAccessDeniedException', 'KMSDisabledException', 'KMSInvalidStateException', 'KMSNotFoundException', 'KMSOptInRequired', 'KMSThrottlingException', 'LimitExceededException', 'ProvisionedThroughputExceededException', 'ResourceInUseException', 'ResourceNotFoundException'],
     'kms': ['AlreadyExistsException', 'CloudHsmClusterInUseException', 'CloudHsmClusterInvalidConfigurationException', 'CloudHsmClusterNotActiveException', 'CloudHsmClusterNotFoundException', 'CloudHsmClusterNotRelatedException', 'CustomKeyStoreHasCMKsException', 'CustomKeyStoreInvalidStateException', 'CustomKeyStoreNameInUseException', 'CustomKeyStoreNotFoundException', 'DependencyTimeoutException', 'DisabledException', 'ExpiredImportTokenException', 'IncorrectKeyMaterialException', 'IncorrectTrustAnchorException', 'InvalidAliasNameException', 'InvalidArnException', 'InvalidCiphertextException', 'InvalidGrantIdException', 'InvalidGrantTokenException', 'InvalidImportTokenException', 'InvalidKeyUsageException', 'InvalidMarkerException', 'KMSInternalException', 'KMSInvalidStateException', 'KeyUnavailableException', 'LimitExceededException', 'MalformedPolicyDocumentException', 'NotFoundException', 'TagException', 'UnsupportedOperationException'],
     'lambda': ['CodeStorageExceededException', 'EC2AccessDeniedException', 'EC2ThrottledException', 'EC2UnexpectedException', 'ENILimitReachedException', 'InvalidParameterValueException', 'InvalidRequestContentException', 'InvalidRuntimeException', 'InvalidSecurityGroupIDException', 'InvalidSubnetIDException', 'InvalidZipFileException', 'KMSAccessDeniedException', 'KMSDisabledException', 'KMSInvalidStateException', 'KMSNotFoundException', 'PolicyLengthExceededException', 'PreconditionFailedException', 'RequestTooLargeException', 'ResourceConflictException', 'ResourceInUseException', 'ResourceNotFoundException', 'ServiceException', 'SubnetIPAddressLimitReachedException', 'TooManyRequestsException', 'UnsupportedMediaTypeException'],
     'logs': ['DataAlreadyAcceptedException', 'InvalidOperationException', 'InvalidParameterException', 'InvalidSequenceTokenException', 'LimitExceededException', 'MalformedQueryException', 'OperationAbortedException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException', 'ServiceUnavailableException', 'UnrecognizedClientException'],
     'neptune': ['AuthorizationNotFound', 'CertificateNotFound', 'DBClusterAlreadyExistsFault', 'DBClusterNotFoundFault', 'DBClusterParameterGroupNotFound', 'DBClusterQuotaExceededFault', 'DBClusterRoleAlreadyExists', 'DBClusterRoleNotFound', 'DBClusterRoleQuotaExceeded', 'DBClusterSnapshotAlreadyExistsFault', 'DBClusterSnapshotNotFoundFault', 'DBInstanceAlreadyExists', 'DBInstanceNotFound', 'DBParameterGroupAlreadyExists', 'DBParameterGroupNotFound', 'DBParameterGroupQuotaExceeded', 'DBSecurityGroupNotFound', 'DBSnapshotAlreadyExists', 'DBSnapshotNotFound', 'DBSubnetGroupAlreadyExists', 'DBSubnetGroupDoesNotCoverEnoughAZs', 'DBSubnetGroupNotFoundFault', 'DBSubnetGroupQuotaExceeded', 'DBSubnetQuotaExceededFault', 'DBUpgradeDependencyFailure', 'DomainNotFoundFault', 'EventSubscriptionQuotaExceeded', 'InstanceQuotaExceeded', 'InsufficientDBClusterCapacityFault', 'InsufficientDBInstanceCapacity', 'InsufficientStorageClusterCapacity', 'InvalidDBClusterSnapshotStateFault', 'InvalidDBClusterStateFault', 'InvalidDBInstanceState', 'InvalidDBParameterGroupState', 'InvalidDBSecurityGroupState', 'InvalidDBSnapshotState', 'InvalidDBSubnetGroupStateFault', 'InvalidDBSubnetStateFault', 'InvalidEventSubscriptionState', 'InvalidRestoreFault', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'KMSKeyNotAccessibleFault', 'OptionGroupNotFoundFault', 'ProvisionedIopsNotAvailableInAZFault', 'ResourceNotFoundFault', 'SNSInvalidTopic', 'SNSNoAuthorization', 'SNSTopicArnNotFound', 'SharedSnapshotQuotaExceeded', 'SnapshotQuotaExceeded', 'SourceNotFound', 'StorageQuotaExceeded', 'StorageTypeNotSupported', 'SubnetAlreadyInUse', 'SubscriptionAlreadyExist', 'SubscriptionCategoryNotFound', 'SubscriptionNotFound'],
     'rds': ['AuthorizationAlreadyExists', 'AuthorizationNotFound', 'AuthorizationQuotaExceeded', 'BackupPolicyNotFoundFault', 'CertificateNotFound', 'DBClusterAlreadyExistsFault', 'DBClusterBacktrackNotFoundFault', 'DBClusterEndpointAlreadyExistsFault', 'DBClusterEndpointNotFoundFault', 'DBClusterEndpointQuotaExceededFault', 'DBClusterNotFoundFault', 'DBClusterParameterGroupNotFound', 'DBClusterQuotaExceededFault', 'DBClusterRoleAlreadyExists', 'DBClusterRoleNotFound', 'DBClusterRoleQuotaExceeded', 'DBClusterSnapshotAlreadyExistsFault', 'DBClusterSnapshotNotFoundFault', 'DBInstanceAlreadyExists', 'DBInstanceAutomatedBackupNotFound', 'DBInstanceAutomatedBackupQuotaExceeded', 'DBInstanceNotFound', 'DBInstanceRoleAlreadyExists', 'DBInstanceRoleNotFound', 'DBInstanceRoleQuotaExceeded', 'DBLogFileNotFoundFault', 'DBParameterGroupAlreadyExists', 'DBParameterGroupNotFound', 'DBParameterGroupQuotaExceeded', 'DBSecurityGroupAlreadyExists', 'DBSecurityGroupNotFound', 'DBSecurityGroupNotSupported', 'QuotaExceeded.DBSecurityGroup', 'DBSnapshotAlreadyExists', 'DBSnapshotNotFound', 'DBSubnetGroupAlreadyExists', 'DBSubnetGroupDoesNotCoverEnoughAZs', 'DBSubnetGroupNotAllowedFault', 'DBSubnetGroupNotFoundFault', 'DBSubnetGroupQuotaExceeded', 'DBSubnetQuotaExceededFault', 'DBUpgradeDependencyFailure', 'DomainNotFoundFault', 'EventSubscriptionQuotaExceeded', 'GlobalClusterAlreadyExistsFault', 'GlobalClusterNotFoundFault', 'GlobalClusterQuotaExceededFault', 'InstanceQuotaExceeded', 'InsufficientDBClusterCapacityFault', 'InsufficientDBInstanceCapacity', 'InsufficientStorageClusterCapacity', 'InvalidDBClusterCapacityFault', 'InvalidDBClusterEndpointStateFault', 'InvalidDBClusterSnapshotStateFault', 'InvalidDBClusterStateFault', 'InvalidDBInstanceAutomatedBackupState', 'InvalidDBInstanceState', 'InvalidDBParameterGroupState', 'InvalidDBSecurityGroupState', 'InvalidDBSnapshotState', 'InvalidDBSubnetGroupFault', 'InvalidDBSubnetGroupStateFault', 'InvalidDBSubnetStateFault', 
'InvalidEventSubscriptionState', 'InvalidGlobalClusterStateFault', 'InvalidOptionGroupStateFault', 'InvalidRestoreFault', 'InvalidS3BucketFault', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'KMSKeyNotAccessibleFault', 'OptionGroupAlreadyExistsFault', 'OptionGroupNotFoundFault', 'OptionGroupQuotaExceededFault', 'PointInTimeRestoreNotEnabled', 'ProvisionedIopsNotAvailableInAZFault', 'ReservedDBInstanceAlreadyExists', 'ReservedDBInstanceNotFound', 'ReservedDBInstanceQuotaExceeded', 'ReservedDBInstancesOfferingNotFound', 'ResourceNotFoundFault', 'SNSInvalidTopic', 'SNSNoAuthorization', 'SNSTopicArnNotFound', 'SharedSnapshotQuotaExceeded', 'SnapshotQuotaExceeded', 'SourceNotFound', 'StorageQuotaExceeded', 'StorageTypeNotSupported', 'SubnetAlreadyInUse', 'SubscriptionAlreadyExist', 'SubscriptionCategoryNotFound', 'SubscriptionNotFound'],
     'route53': ['ConcurrentModification', 'ConflictingDomainExists', 'ConflictingTypes', 'DelegationSetAlreadyCreated', 'DelegationSetAlreadyReusable', 'DelegationSetInUse', 'DelegationSetNotAvailable', 'DelegationSetNotReusable', 'HealthCheckAlreadyExists', 'HealthCheckInUse', 'HealthCheckVersionMismatch', 'HostedZoneAlreadyExists', 'HostedZoneNotEmpty', 'HostedZoneNotFound', 'HostedZoneNotPrivate', 'IncompatibleVersion', 'InsufficientCloudWatchLogsResourcePolicy', 'InvalidArgument', 'InvalidChangeBatch', 'InvalidDomainName', 'InvalidInput', 'InvalidPaginationToken', 'InvalidTrafficPolicyDocument', 'InvalidVPCId', 'LastVPCAssociation', 'LimitsExceeded', 'NoSuchChange', 'NoSuchCloudWatchLogsLogGroup', 'NoSuchDelegationSet', 'NoSuchGeoLocation', 'NoSuchHealthCheck', 'NoSuchHostedZone', 'NoSuchQueryLoggingConfig', 'NoSuchTrafficPolicy', 'NoSuchTrafficPolicyInstance', 'NotAuthorizedException', 'PriorRequestNotComplete', 'PublicZoneVPCAssociation', 'QueryLoggingConfigAlreadyExists', 'ThrottlingException', 'TooManyHealthChecks', 'TooManyHostedZones', 'TooManyTrafficPolicies', 'TooManyTrafficPolicyInstances', 'TooManyTrafficPolicyVersionsForCurrentPolicy', 'TooManyVPCAssociationAuthorizations', 'TrafficPolicyAlreadyExists', 'TrafficPolicyInUse', 'TrafficPolicyInstanceAlreadyExists', 'VPCAssociationAuthorizationNotFound', 'VPCAssociationNotFound'],
     's3': ['BucketAlreadyExists', 'BucketAlreadyOwnedByYou', 'NoSuchBucket', 'NoSuchKey', 'NoSuchUpload', 'ObjectAlreadyInActiveTierError', 'ObjectNotInActiveTierError'],
     'sagemaker': ['ResourceInUse', 'ResourceLimitExceeded', 'ResourceNotFound'],
     'secretsmanager': ['DecryptionFailure', 'EncryptionFailure', 'InternalServiceError', 'InvalidNextTokenException', 'InvalidParameterException', 'InvalidRequestException', 'LimitExceededException', 'MalformedPolicyDocumentException', 'PreconditionNotMetException', 'ResourceExistsException', 'ResourceNotFoundException'],
     'ses': ['AccountSendingPausedException', 'AlreadyExists', 'CannotDelete', 'ConfigurationSetAlreadyExists', 'ConfigurationSetDoesNotExist', 'ConfigurationSetSendingPausedException', 'CustomVerificationEmailInvalidContent', 'CustomVerificationEmailTemplateAlreadyExists', 'CustomVerificationEmailTemplateDoesNotExist', 'EventDestinationAlreadyExists', 'EventDestinationDoesNotExist', 'FromEmailAddressNotVerified', 'InvalidCloudWatchDestination', 'InvalidConfigurationSet', 'InvalidFirehoseDestination', 'InvalidLambdaFunction', 'InvalidPolicy', 'InvalidRenderingParameter', 'InvalidS3Configuration', 'InvalidSNSDestination', 'InvalidSnsTopic', 'InvalidTemplate', 'InvalidTrackingOptions', 'LimitExceeded', 'MailFromDomainNotVerifiedException', 'MessageRejected', 'MissingRenderingAttribute', 'ProductionAccessNotGranted', 'RuleDoesNotExist', 'RuleSetDoesNotExist', 'TemplateDoesNotExist', 'TrackingOptionsAlreadyExistsException', 'TrackingOptionsDoesNotExistException'],
     'sns': ['AuthorizationError', 'EndpointDisabled', 'FilterPolicyLimitExceeded', 'InternalError', 'InvalidParameter', 'ParameterValueInvalid', 'InvalidSecurity', 'KMSAccessDenied', 'KMSDisabled', 'KMSInvalidState', 'KMSNotFound', 'KMSOptInRequired', 'KMSThrottling', 'NotFound', 'PlatformApplicationDisabled', 'SubscriptionLimitExceeded', 'Throttled', 'TopicLimitExceeded'],
     'sqs': ['AWS.SimpleQueueService.BatchEntryIdsNotDistinct', 'AWS.SimpleQueueService.BatchRequestTooLong', 'AWS.SimpleQueueService.EmptyBatchRequest', 'InvalidAttributeName', 'AWS.SimpleQueueService.InvalidBatchEntryId', 'InvalidIdFormat', 'InvalidMessageContents', 'AWS.SimpleQueueService.MessageNotInflight', 'OverLimit', 'AWS.SimpleQueueService.PurgeQueueInProgress', 'AWS.SimpleQueueService.QueueDeletedRecently', 'AWS.SimpleQueueService.NonExistentQueue', 'QueueAlreadyExists', 'ReceiptHandleIsInvalid', 'AWS.SimpleQueueService.TooManyEntriesInBatchRequest', 'AWS.SimpleQueueService.UnsupportedOperation'],
     'ssm': ['AlreadyExistsException', 'AssociatedInstances', 'AssociationAlreadyExists', 'AssociationDoesNotExist', 'AssociationExecutionDoesNotExist', 'AssociationLimitExceeded', 'AssociationVersionLimitExceeded', 'AutomationDefinitionNotFoundException', 'AutomationDefinitionVersionNotFoundException', 'AutomationExecutionLimitExceededException', 'AutomationExecutionNotFoundException', 'AutomationStepNotFoundException', 'ComplianceTypeCountLimitExceededException', 'CustomSchemaCountLimitExceededException', 'DocumentAlreadyExists', 'DocumentLimitExceeded', 'DocumentPermissionLimit', 'DocumentVersionLimitExceeded', 'DoesNotExistException', 'DuplicateDocumentContent', 'DuplicateDocumentVersionName', 'DuplicateInstanceId', 'FeatureNotAvailableException', 'HierarchyLevelLimitExceededException', 'HierarchyTypeMismatchException', 'IdempotentParameterMismatch', 'InternalServerError', 'InvalidActivation', 'InvalidActivationId', 'InvalidAggregatorException', 'InvalidAllowedPatternException', 'InvalidAssociation', 'InvalidAssociationVersion', 'InvalidAutomationExecutionParametersException', 'InvalidAutomationSignalException', 'InvalidAutomationStatusUpdateException', 'InvalidCommandId', 'InvalidDeleteInventoryParametersException', 'InvalidDeletionIdException', 'InvalidDocument', 'InvalidDocumentContent', 'InvalidDocumentOperation', 'InvalidDocumentSchemaVersion', 'InvalidDocumentVersion', 'InvalidFilter', 'InvalidFilterKey', 'InvalidFilterOption', 'InvalidFilterValue', 'InvalidInstanceId', 'InvalidInstanceInformationFilterValue', 'InvalidInventoryGroupException', 'InvalidInventoryItemContextException', 'InvalidInventoryRequestException', 'InvalidItemContentException', 'InvalidKeyId', 'InvalidNextToken', 'InvalidNotificationConfig', 'InvalidOptionException', 'InvalidOutputFolder', 'InvalidOutputLocation', 'InvalidParameters', 'InvalidPermissionType', 'InvalidPluginName', 'InvalidResourceId', 'InvalidResourceType', 'InvalidResultAttributeException', 'InvalidRole', 
'InvalidSchedule', 'InvalidTarget', 'InvalidTypeNameException', 'InvalidUpdate', 'InvocationDoesNotExist', 'ItemContentMismatchException', 'ItemSizeLimitExceededException', 'MaxDocumentSizeExceeded', 'ParameterAlreadyExists', 'ParameterLimitExceeded', 'ParameterMaxVersionLimitExceeded', 'ParameterNotFound', 'ParameterPatternMismatchException', 'ParameterVersionLabelLimitExceeded', 'ParameterVersionNotFound', 'ResourceDataSyncAlreadyExistsException', 'ResourceDataSyncCountExceededException', 'ResourceDataSyncInvalidConfigurationException', 'ResourceDataSyncNotFoundException', 'ResourceInUseException', 'ResourceLimitExceededException', 'StatusUnchanged', 'SubTypeCountLimitExceededException', 'TargetInUseException', 'TargetNotConnected', 'TooManyTagsError', 'TooManyUpdates', 'TotalSizeLimitExceededException', 'UnsupportedInventoryItemContextException', 'UnsupportedInventorySchemaVersionException', 'UnsupportedOperatingSystem', 'UnsupportedParameterType', 'UnsupportedPlatformType'],
     'stepfunctions': ['ActivityDoesNotExist', 'ActivityLimitExceeded', 'ActivityWorkerLimitExceeded', 'ExecutionAlreadyExists', 'ExecutionDoesNotExist', 'ExecutionLimitExceeded', 'InvalidArn', 'InvalidDefinition', 'InvalidExecutionInput', 'InvalidName', 'InvalidOutput', 'InvalidToken', 'MissingRequiredParameter', 'ResourceNotFound', 'StateMachineAlreadyExists', 'StateMachineDeleting', 'StateMachineDoesNotExist', 'StateMachineLimitExceeded', 'TaskDoesNotExist', 'TaskTimedOut', 'TooManyTags'],
     'sts': ['ExpiredTokenException', 'IDPCommunicationError', 'IDPRejectedClaim', 'InvalidAuthorizationMessageException', 'InvalidIdentityToken', 'MalformedPolicyDocument', 'PackedPolicyTooLarge', 'RegionDisabledException'],
     'xray': ['InvalidRequestException', 'RuleLimitExceededException', 'ThrottledException']}

    As a few others already mentioned, you can catch certain errors using the service client (service_client.exceptions.<ExceptionClass>) or resource (service_resource.meta.client.exceptions.<ExceptionClass>); however, this is not well documented (nor is it documented which exceptions belong to which client). So here is how to get the complete mapping at the time of writing (January 2020) in region EU (Ireland) (eu-west-1):

    import boto3, pprint
    
    region_name = 'eu-west-1'
    session = boto3.Session(region_name=region_name)
    exceptions = {
      service: list(session.client(service).exceptions._code_to_exception)
      for service in session.get_available_services()
    }
    pprint.pprint(exceptions, width=20000)
    
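    To see how the codes in this mapping relate to what your code actually catches: every modeled class on client.exceptions subclasses botocore's ClientError, and a generic handler can always fall back to the raw error code carried in the response. The sketch below constructs a ClientError by hand (no live AWS call; the bucket/key names are made up) to show the code-based check:

    ```python
    # Sketch: matching a boto3 error by its raw code, without a live AWS call.
    # The modeled classes (e.g. s3_client.exceptions.NoSuchKey) are generated
    # from the same codes listed in the mapping below.
    from botocore.exceptions import ClientError

    # Build the error as boto3 would raise it for a GetObject on a missing key.
    err = ClientError(
        {'Error': {'Code': 'NoSuchKey', 'Message': 'The key does not exist'}},
        'GetObject',
    )

    # A generic handler inspects the code instead of the modeled class:
    assert err.response['Error']['Code'] == 'NoSuchKey'
    ```

    In a real call you would wrap the request in try/except, catch the modeled class when you know it (except s3_client.exceptions.NoSuchKey), and fall back to except ClientError with a code check otherwise.
    
    
    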

    Here is a subset of the pretty large document:

    {'acm': ['InvalidArnException', 'InvalidDomainValidationOptionsException', 'InvalidStateException', 'InvalidTagException', 'LimitExceededException', 'RequestInProgressException', 'ResourceInUseException', 'ResourceNotFoundException', 'TooManyTagsException'],
     'apigateway': ['BadRequestException', 'ConflictException', 'LimitExceededException', 'NotFoundException', 'ServiceUnavailableException', 'TooManyRequestsException', 'UnauthorizedException'],
     'athena': ['InternalServerException', 'InvalidRequestException', 'TooManyRequestsException'],
     'autoscaling': ['AlreadyExists', 'InvalidNextToken', 'LimitExceeded', 'ResourceContention', 'ResourceInUse', 'ScalingActivityInProgress', 'ServiceLinkedRoleFailure'],
     'cloudformation': ['AlreadyExistsException', 'ChangeSetNotFound', 'CreatedButModifiedException', 'InsufficientCapabilitiesException', 'InvalidChangeSetStatus', 'InvalidOperationException', 'LimitExceededException', 'NameAlreadyExistsException', 'OperationIdAlreadyExistsException', 'OperationInProgressException', 'OperationNotFoundException', 'StackInstanceNotFoundException', 'StackSetNotEmptyException', 'StackSetNotFoundException', 'StaleRequestException', 'TokenAlreadyExistsException'],
     'cloudfront': ['AccessDenied', 'BatchTooLarge', 'CNAMEAlreadyExists', 'CannotChangeImmutablePublicKeyFields', 'CloudFrontOriginAccessIdentityAlreadyExists', 'CloudFrontOriginAccessIdentityInUse', 'DistributionAlreadyExists', 'DistributionNotDisabled', 'FieldLevelEncryptionConfigAlreadyExists', 'FieldLevelEncryptionConfigInUse', 'FieldLevelEncryptionProfileAlreadyExists', 'FieldLevelEncryptionProfileInUse', 'FieldLevelEncryptionProfileSizeExceeded', 'IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior', 'IllegalUpdate', 'InconsistentQuantities', 'InvalidArgument', 'InvalidDefaultRootObject', 'InvalidErrorCode', 'InvalidForwardCookies', 'InvalidGeoRestrictionParameter', 'InvalidHeadersForS3Origin', 'InvalidIfMatchVersion', 'InvalidLambdaFunctionAssociation', 'InvalidLocationCode', 'InvalidMinimumProtocolVersion', 'InvalidOrigin', 'InvalidOriginAccessIdentity', 'InvalidOriginKeepaliveTimeout', 'InvalidOriginReadTimeout', 'InvalidProtocolSettings', 'InvalidQueryStringParameters', 'InvalidRelativePath', 'InvalidRequiredProtocol', 'InvalidResponseCode', 'InvalidTTLOrder', 'InvalidTagging', 'InvalidViewerCertificate', 'InvalidWebACLId', 'MissingBody', 'NoSuchCloudFrontOriginAccessIdentity', 'NoSuchDistribution', 'NoSuchFieldLevelEncryptionConfig', 'NoSuchFieldLevelEncryptionProfile', 'NoSuchInvalidation', 'NoSuchOrigin', 'NoSuchPublicKey', 'NoSuchResource', 'NoSuchStreamingDistribution', 'PreconditionFailed', 'PublicKeyAlreadyExists', 'PublicKeyInUse', 'QueryArgProfileEmpty', 'StreamingDistributionAlreadyExists', 'StreamingDistributionNotDisabled', 'TooManyCacheBehaviors', 'TooManyCertificates', 'TooManyCloudFrontOriginAccessIdentities', 'TooManyCookieNamesInWhiteList', 'TooManyDistributionCNAMEs', 'TooManyDistributions', 'TooManyDistributionsAssociatedToFieldLevelEncryptionConfig', 'TooManyDistributionsWithLambdaAssociations', 'TooManyFieldLevelEncryptionConfigs', 'TooManyFieldLevelEncryptionContentTypeProfiles', 
'TooManyFieldLevelEncryptionEncryptionEntities', 'TooManyFieldLevelEncryptionFieldPatterns', 'TooManyFieldLevelEncryptionProfiles', 'TooManyFieldLevelEncryptionQueryArgProfiles', 'TooManyHeadersInForwardedValues', 'TooManyInvalidationsInProgress', 'TooManyLambdaFunctionAssociations', 'TooManyOriginCustomHeaders', 'TooManyOriginGroupsPerDistribution', 'TooManyOrigins', 'TooManyPublicKeys', 'TooManyQueryStringParameters', 'TooManyStreamingDistributionCNAMEs', 'TooManyStreamingDistributions', 'TooManyTrustedSigners', 'TrustedSignerDoesNotExist'],
     'cloudtrail': ['CloudTrailARNInvalidException', 'CloudTrailAccessNotEnabledException', 'CloudWatchLogsDeliveryUnavailableException', 'InsufficientDependencyServiceAccessPermissionException', 'InsufficientEncryptionPolicyException', 'InsufficientS3BucketPolicyException', 'InsufficientSnsTopicPolicyException', 'InvalidCloudWatchLogsLogGroupArnException', 'InvalidCloudWatchLogsRoleArnException', 'InvalidEventSelectorsException', 'InvalidHomeRegionException', 'InvalidKmsKeyIdException', 'InvalidLookupAttributesException', 'InvalidMaxResultsException', 'InvalidNextTokenException', 'InvalidParameterCombinationException', 'InvalidS3BucketNameException', 'InvalidS3PrefixException', 'InvalidSnsTopicNameException', 'InvalidTagParameterException', 'InvalidTimeRangeException', 'InvalidTokenException', 'InvalidTrailNameException', 'KmsException', 'KmsKeyDisabledException', 'KmsKeyNotFoundException', 'MaximumNumberOfTrailsExceededException', 'NotOrganizationMasterAccountException', 'OperationNotPermittedException', 'OrganizationNotInAllFeaturesModeException', 'OrganizationsNotInUseException', 'ResourceNotFoundException', 'ResourceTypeNotSupportedException', 'S3BucketDoesNotExistException', 'TagsLimitExceededException', 'TrailAlreadyExistsException', 'TrailNotFoundException', 'TrailNotProvidedException', 'UnsupportedOperationException'],
     'cloudwatch': ['InvalidParameterInput', 'ResourceNotFound', 'InternalServiceError', 'InvalidFormat', 'InvalidNextToken', 'InvalidParameterCombination', 'InvalidParameterValue', 'LimitExceeded', 'MissingParameter'],
     'codebuild': ['AccountLimitExceededException', 'InvalidInputException', 'OAuthProviderException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException'],
     'config': ['InsufficientDeliveryPolicyException', 'InsufficientPermissionsException', 'InvalidConfigurationRecorderNameException', 'InvalidDeliveryChannelNameException', 'InvalidLimitException', 'InvalidNextTokenException', 'InvalidParameterValueException', 'InvalidRecordingGroupException', 'InvalidResultTokenException', 'InvalidRoleException', 'InvalidS3KeyPrefixException', 'InvalidSNSTopicARNException', 'InvalidTimeRangeException', 'LastDeliveryChannelDeleteFailedException', 'LimitExceededException', 'MaxNumberOfConfigRulesExceededException', 'MaxNumberOfConfigurationRecordersExceededException', 'MaxNumberOfDeliveryChannelsExceededException', 'MaxNumberOfRetentionConfigurationsExceededException', 'NoAvailableConfigurationRecorderException', 'NoAvailableDeliveryChannelException', 'NoAvailableOrganizationException', 'NoRunningConfigurationRecorderException', 'NoSuchBucketException', 'NoSuchConfigRuleException', 'NoSuchConfigurationAggregatorException', 'NoSuchConfigurationRecorderException', 'NoSuchDeliveryChannelException', 'NoSuchRetentionConfigurationException', 'OrganizationAccessDeniedException', 'OrganizationAllFeaturesNotEnabledException', 'OversizedConfigurationItemException', 'ResourceInUseException', 'ResourceNotDiscoveredException', 'ValidationException'],
     'dynamodb': ['BackupInUseException', 'BackupNotFoundException', 'ConditionalCheckFailedException', 'ContinuousBackupsUnavailableException', 'GlobalTableAlreadyExistsException', 'GlobalTableNotFoundException', 'IdempotentParameterMismatchException', 'IndexNotFoundException', 'InternalServerError', 'InvalidRestoreTimeException', 'ItemCollectionSizeLimitExceededException', 'LimitExceededException', 'PointInTimeRecoveryUnavailableException', 'ProvisionedThroughputExceededException', 'ReplicaAlreadyExistsException', 'ReplicaNotFoundException', 'RequestLimitExceeded', 'ResourceInUseException', 'ResourceNotFoundException', 'TableAlreadyExistsException', 'TableInUseException', 'TableNotFoundException', 'TransactionCanceledException', 'TransactionConflictException', 'TransactionInProgressException'],
     'ec2': [],
     'ecr': ['EmptyUploadException', 'ImageAlreadyExistsException', 'ImageNotFoundException', 'InvalidLayerException', 'InvalidLayerPartException', 'InvalidParameterException', 'InvalidTagParameterException', 'LayerAlreadyExistsException', 'LayerInaccessibleException', 'LayerPartTooSmallException', 'LayersNotFoundException', 'LifecyclePolicyNotFoundException', 'LifecyclePolicyPreviewInProgressException', 'LifecyclePolicyPreviewNotFoundException', 'LimitExceededException', 'RepositoryAlreadyExistsException', 'RepositoryNotEmptyException', 'RepositoryNotFoundException', 'RepositoryPolicyNotFoundException', 'ServerException', 'TooManyTagsException', 'UploadNotFoundException'],
     'ecs': ['AccessDeniedException', 'AttributeLimitExceededException', 'BlockedException', 'ClientException', 'ClusterContainsContainerInstancesException', 'ClusterContainsServicesException', 'ClusterContainsTasksException', 'ClusterNotFoundException', 'InvalidParameterException', 'MissingVersionException', 'NoUpdateAvailableException', 'PlatformTaskDefinitionIncompatibilityException', 'PlatformUnknownException', 'ResourceNotFoundException', 'ServerException', 'ServiceNotActiveException', 'ServiceNotFoundException', 'TargetNotFoundException', 'UnsupportedFeatureException', 'UpdateInProgressException'],
     'efs': ['BadRequest', 'DependencyTimeout', 'FileSystemAlreadyExists', 'FileSystemInUse', 'FileSystemLimitExceeded', 'FileSystemNotFound', 'IncorrectFileSystemLifeCycleState', 'IncorrectMountTargetState', 'InsufficientThroughputCapacity', 'InternalServerError', 'IpAddressInUse', 'MountTargetConflict', 'MountTargetNotFound', 'NetworkInterfaceLimitExceeded', 'NoFreeAddressesInSubnet', 'SecurityGroupLimitExceeded', 'SecurityGroupNotFound', 'SubnetNotFound', 'ThroughputLimitExceeded', 'TooManyRequests', 'UnsupportedAvailabilityZone'],
     'eks': ['ClientException', 'InvalidParameterException', 'InvalidRequestException', 'ResourceInUseException', 'ResourceLimitExceededException', 'ResourceNotFoundException', 'ServerException', 'ServiceUnavailableException', 'UnsupportedAvailabilityZoneException'],
     'elasticache': ['APICallRateForCustomerExceeded', 'AuthorizationAlreadyExists', 'AuthorizationNotFound', 'CacheClusterAlreadyExists', 'CacheClusterNotFound', 'CacheParameterGroupAlreadyExists', 'CacheParameterGroupNotFound', 'CacheParameterGroupQuotaExceeded', 'CacheSecurityGroupAlreadyExists', 'CacheSecurityGroupNotFound', 'QuotaExceeded.CacheSecurityGroup', 'CacheSubnetGroupAlreadyExists', 'CacheSubnetGroupInUse', 'CacheSubnetGroupNotFoundFault', 'CacheSubnetGroupQuotaExceeded', 'CacheSubnetQuotaExceededFault', 'ClusterQuotaForCustomerExceeded', 'InsufficientCacheClusterCapacity', 'InvalidARN', 'InvalidCacheClusterState', 'InvalidCacheParameterGroupState', 'InvalidCacheSecurityGroupState', 'InvalidParameterCombination', 'InvalidParameterValue', 'InvalidReplicationGroupState', 'InvalidSnapshotState', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'NoOperationFault', 'NodeGroupNotFoundFault', 'NodeGroupsPerReplicationGroupQuotaExceeded', 'NodeQuotaForClusterExceeded', 'NodeQuotaForCustomerExceeded', 'ReplicationGroupAlreadyExists', 'ReplicationGroupNotFoundFault', 'ReservedCacheNodeAlreadyExists', 'ReservedCacheNodeNotFound', 'ReservedCacheNodeQuotaExceeded', 'ReservedCacheNodesOfferingNotFound', 'ServiceLinkedRoleNotFoundFault', 'SnapshotAlreadyExistsFault', 'SnapshotFeatureNotSupportedFault', 'SnapshotNotFoundFault', 'SnapshotQuotaExceededFault', 'SubnetInUse', 'TagNotFound', 'TagQuotaPerResourceExceeded', 'TestFailoverNotAvailableFault'],
     'elasticbeanstalk': ['CodeBuildNotInServiceRegionException', 'ElasticBeanstalkServiceException', 'InsufficientPrivilegesException', 'InvalidRequestException', 'ManagedActionInvalidStateException', 'OperationInProgressFailure', 'PlatformVersionStillReferencedException', 'ResourceNotFoundException', 'ResourceTypeNotSupportedException', 'S3LocationNotInServiceRegionException', 'S3SubscriptionRequiredException', 'SourceBundleDeletionFailure', 'TooManyApplicationVersionsException', 'TooManyApplicationsException', 'TooManyBucketsException', 'TooManyConfigurationTemplatesException', 'TooManyEnvironmentsException', 'TooManyPlatformsException', 'TooManyTagsException'],
     'elb': ['LoadBalancerNotFound', 'CertificateNotFound', 'DependencyThrottle', 'DuplicateLoadBalancerName', 'DuplicateListener', 'DuplicatePolicyName', 'DuplicateTagKeys', 'InvalidConfigurationRequest', 'InvalidInstance', 'InvalidScheme', 'InvalidSecurityGroup', 'InvalidSubnet', 'ListenerNotFound', 'LoadBalancerAttributeNotFound', 'OperationNotPermitted', 'PolicyNotFound', 'PolicyTypeNotFound', 'SubnetNotFound', 'TooManyLoadBalancers', 'TooManyPolicies', 'TooManyTags', 'UnsupportedProtocol'],
     'emr': ['InternalServerError', 'InternalServerException', 'InvalidRequestException'],
     'es': ['BaseException', 'DisabledOperationException', 'InternalException', 'InvalidTypeException', 'LimitExceededException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException', 'ValidationException'],
     'events': ['ConcurrentModificationException', 'InternalException', 'InvalidEventPatternException', 'LimitExceededException', 'ManagedRuleException', 'PolicyLengthExceededException', 'ResourceNotFoundException'],
     'firehose': ['ConcurrentModificationException', 'InvalidArgumentException', 'LimitExceededException', 'ResourceInUseException', 'ResourceNotFoundException', 'ServiceUnavailableException'],
     'glacier': ['InsufficientCapacityException', 'InvalidParameterValueException', 'LimitExceededException', 'MissingParameterValueException', 'PolicyEnforcedException', 'RequestTimeoutException', 'ResourceNotFoundException', 'ServiceUnavailableException'],
     'glue': ['AccessDeniedException', 'AlreadyExistsException', 'ConcurrentModificationException', 'ConcurrentRunsExceededException', 'ConditionCheckFailureException', 'CrawlerNotRunningException', 'CrawlerRunningException', 'CrawlerStoppingException', 'EntityNotFoundException', 'GlueEncryptionException', 'IdempotentParameterMismatchException', 'InternalServiceException', 'InvalidInputException', 'NoScheduleException', 'OperationTimeoutException', 'ResourceNumberLimitExceededException', 'SchedulerNotRunningException', 'SchedulerRunningException', 'SchedulerTransitioningException', 'ValidationException', 'VersionMismatchException'],
     'iam': ['ConcurrentModification', 'ReportExpired', 'ReportNotPresent', 'ReportInProgress', 'DeleteConflict', 'DuplicateCertificate', 'DuplicateSSHPublicKey', 'EntityAlreadyExists', 'EntityTemporarilyUnmodifiable', 'InvalidAuthenticationCode', 'InvalidCertificate', 'InvalidInput', 'InvalidPublicKey', 'InvalidUserType', 'KeyPairMismatch', 'LimitExceeded', 'MalformedCertificate', 'MalformedPolicyDocument', 'NoSuchEntity', 'PasswordPolicyViolation', 'PolicyEvaluation', 'PolicyNotAttachable', 'ServiceFailure', 'NotSupportedService', 'UnmodifiableEntity', 'UnrecognizedPublicKeyEncoding'],
     'kinesis': ['ExpiredIteratorException', 'ExpiredNextTokenException', 'InternalFailureException', 'InvalidArgumentException', 'KMSAccessDeniedException', 'KMSDisabledException', 'KMSInvalidStateException', 'KMSNotFoundException', 'KMSOptInRequired', 'KMSThrottlingException', 'LimitExceededException', 'ProvisionedThroughputExceededException', 'ResourceInUseException', 'ResourceNotFoundException'],
     'kms': ['AlreadyExistsException', 'CloudHsmClusterInUseException', 'CloudHsmClusterInvalidConfigurationException', 'CloudHsmClusterNotActiveException', 'CloudHsmClusterNotFoundException', 'CloudHsmClusterNotRelatedException', 'CustomKeyStoreHasCMKsException', 'CustomKeyStoreInvalidStateException', 'CustomKeyStoreNameInUseException', 'CustomKeyStoreNotFoundException', 'DependencyTimeoutException', 'DisabledException', 'ExpiredImportTokenException', 'IncorrectKeyMaterialException', 'IncorrectTrustAnchorException', 'InvalidAliasNameException', 'InvalidArnException', 'InvalidCiphertextException', 'InvalidGrantIdException', 'InvalidGrantTokenException', 'InvalidImportTokenException', 'InvalidKeyUsageException', 'InvalidMarkerException', 'KMSInternalException', 'KMSInvalidStateException', 'KeyUnavailableException', 'LimitExceededException', 'MalformedPolicyDocumentException', 'NotFoundException', 'TagException', 'UnsupportedOperationException'],
     'lambda': ['CodeStorageExceededException', 'EC2AccessDeniedException', 'EC2ThrottledException', 'EC2UnexpectedException', 'ENILimitReachedException', 'InvalidParameterValueException', 'InvalidRequestContentException', 'InvalidRuntimeException', 'InvalidSecurityGroupIDException', 'InvalidSubnetIDException', 'InvalidZipFileException', 'KMSAccessDeniedException', 'KMSDisabledException', 'KMSInvalidStateException', 'KMSNotFoundException', 'PolicyLengthExceededException', 'PreconditionFailedException', 'RequestTooLargeException', 'ResourceConflictException', 'ResourceInUseException', 'ResourceNotFoundException', 'ServiceException', 'SubnetIPAddressLimitReachedException', 'TooManyRequestsException', 'UnsupportedMediaTypeException'],
     'logs': ['DataAlreadyAcceptedException', 'InvalidOperationException', 'InvalidParameterException', 'InvalidSequenceTokenException', 'LimitExceededException', 'MalformedQueryException', 'OperationAbortedException', 'ResourceAlreadyExistsException', 'ResourceNotFoundException', 'ServiceUnavailableException', 'UnrecognizedClientException'],
     'neptune': ['AuthorizationNotFound', 'CertificateNotFound', 'DBClusterAlreadyExistsFault', 'DBClusterNotFoundFault', 'DBClusterParameterGroupNotFound', 'DBClusterQuotaExceededFault', 'DBClusterRoleAlreadyExists', 'DBClusterRoleNotFound', 'DBClusterRoleQuotaExceeded', 'DBClusterSnapshotAlreadyExistsFault', 'DBClusterSnapshotNotFoundFault', 'DBInstanceAlreadyExists', 'DBInstanceNotFound', 'DBParameterGroupAlreadyExists', 'DBParameterGroupNotFound', 'DBParameterGroupQuotaExceeded', 'DBSecurityGroupNotFound', 'DBSnapshotAlreadyExists', 'DBSnapshotNotFound', 'DBSubnetGroupAlreadyExists', 'DBSubnetGroupDoesNotCoverEnoughAZs', 'DBSubnetGroupNotFoundFault', 'DBSubnetGroupQuotaExceeded', 'DBSubnetQuotaExceededFault', 'DBUpgradeDependencyFailure', 'DomainNotFoundFault', 'EventSubscriptionQuotaExceeded', 'InstanceQuotaExceeded', 'InsufficientDBClusterCapacityFault', 'InsufficientDBInstanceCapacity', 'InsufficientStorageClusterCapacity', 'InvalidDBClusterSnapshotStateFault', 'InvalidDBClusterStateFault', 'InvalidDBInstanceState', 'InvalidDBParameterGroupState', 'InvalidDBSecurityGroupState', 'InvalidDBSnapshotState', 'InvalidDBSubnetGroupStateFault', 'InvalidDBSubnetStateFault', 'InvalidEventSubscriptionState', 'InvalidRestoreFault', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'KMSKeyNotAccessibleFault', 'OptionGroupNotFoundFault', 'ProvisionedIopsNotAvailableInAZFault', 'ResourceNotFoundFault', 'SNSInvalidTopic', 'SNSNoAuthorization', 'SNSTopicArnNotFound', 'SharedSnapshotQuotaExceeded', 'SnapshotQuotaExceeded', 'SourceNotFound', 'StorageQuotaExceeded', 'StorageTypeNotSupported', 'SubnetAlreadyInUse', 'SubscriptionAlreadyExist', 'SubscriptionCategoryNotFound', 'SubscriptionNotFound'],
     'rds': ['AuthorizationAlreadyExists', 'AuthorizationNotFound', 'AuthorizationQuotaExceeded', 'BackupPolicyNotFoundFault', 'CertificateNotFound', 'DBClusterAlreadyExistsFault', 'DBClusterBacktrackNotFoundFault', 'DBClusterEndpointAlreadyExistsFault', 'DBClusterEndpointNotFoundFault', 'DBClusterEndpointQuotaExceededFault', 'DBClusterNotFoundFault', 'DBClusterParameterGroupNotFound', 'DBClusterQuotaExceededFault', 'DBClusterRoleAlreadyExists', 'DBClusterRoleNotFound', 'DBClusterRoleQuotaExceeded', 'DBClusterSnapshotAlreadyExistsFault', 'DBClusterSnapshotNotFoundFault', 'DBInstanceAlreadyExists', 'DBInstanceAutomatedBackupNotFound', 'DBInstanceAutomatedBackupQuotaExceeded', 'DBInstanceNotFound', 'DBInstanceRoleAlreadyExists', 'DBInstanceRoleNotFound', 'DBInstanceRoleQuotaExceeded', 'DBLogFileNotFoundFault', 'DBParameterGroupAlreadyExists', 'DBParameterGroupNotFound', 'DBParameterGroupQuotaExceeded', 'DBSecurityGroupAlreadyExists', 'DBSecurityGroupNotFound', 'DBSecurityGroupNotSupported', 'QuotaExceeded.DBSecurityGroup', 'DBSnapshotAlreadyExists', 'DBSnapshotNotFound', 'DBSubnetGroupAlreadyExists', 'DBSubnetGroupDoesNotCoverEnoughAZs', 'DBSubnetGroupNotAllowedFault', 'DBSubnetGroupNotFoundFault', 'DBSubnetGroupQuotaExceeded', 'DBSubnetQuotaExceededFault', 'DBUpgradeDependencyFailure', 'DomainNotFoundFault', 'EventSubscriptionQuotaExceeded', 'GlobalClusterAlreadyExistsFault', 'GlobalClusterNotFoundFault', 'GlobalClusterQuotaExceededFault', 'InstanceQuotaExceeded', 'InsufficientDBClusterCapacityFault', 'InsufficientDBInstanceCapacity', 'InsufficientStorageClusterCapacity', 'InvalidDBClusterCapacityFault', 'InvalidDBClusterEndpointStateFault', 'InvalidDBClusterSnapshotStateFault', 'InvalidDBClusterStateFault', 'InvalidDBInstanceAutomatedBackupState', 'InvalidDBInstanceState', 'InvalidDBParameterGroupState', 'InvalidDBSecurityGroupState', 'InvalidDBSnapshotState', 'InvalidDBSubnetGroupFault', 'InvalidDBSubnetGroupStateFault', 'InvalidDBSubnetStateFault', 
'InvalidEventSubscriptionState', 'InvalidGlobalClusterStateFault', 'InvalidOptionGroupStateFault', 'InvalidRestoreFault', 'InvalidS3BucketFault', 'InvalidSubnet', 'InvalidVPCNetworkStateFault', 'KMSKeyNotAccessibleFault', 'OptionGroupAlreadyExistsFault', 'OptionGroupNotFoundFault', 'OptionGroupQuotaExceededFault', 'PointInTimeRestoreNotEnabled', 'ProvisionedIopsNotAvailableInAZFault', 'ReservedDBInstanceAlreadyExists', 'ReservedDBInstanceNotFound', 'ReservedDBInstanceQuotaExceeded', 'ReservedDBInstancesOfferingNotFound', 'ResourceNotFoundFault', 'SNSInvalidTopic', 'SNSNoAuthorization', 'SNSTopicArnNotFound', 'SharedSnapshotQuotaExceeded', 'SnapshotQuotaExceeded', 'SourceNotFound', 'StorageQuotaExceeded', 'StorageTypeNotSupported', 'SubnetAlreadyInUse', 'SubscriptionAlreadyExist', 'SubscriptionCategoryNotFound', 'SubscriptionNotFound'],
     'route53': ['ConcurrentModification', 'ConflictingDomainExists', 'ConflictingTypes', 'DelegationSetAlreadyCreated', 'DelegationSetAlreadyReusable', 'DelegationSetInUse', 'DelegationSetNotAvailable', 'DelegationSetNotReusable', 'HealthCheckAlreadyExists', 'HealthCheckInUse', 'HealthCheckVersionMismatch', 'HostedZoneAlreadyExists', 'HostedZoneNotEmpty', 'HostedZoneNotFound', 'HostedZoneNotPrivate', 'IncompatibleVersion', 'InsufficientCloudWatchLogsResourcePolicy', 'InvalidArgument', 'InvalidChangeBatch', 'InvalidDomainName', 'InvalidInput', 'InvalidPaginationToken', 'InvalidTrafficPolicyDocument', 'InvalidVPCId', 'LastVPCAssociation', 'LimitsExceeded', 'NoSuchChange', 'NoSuchCloudWatchLogsLogGroup', 'NoSuchDelegationSet', 'NoSuchGeoLocation', 'NoSuchHealthCheck', 'NoSuchHostedZone', 'NoSuchQueryLoggingConfig', 'NoSuchTrafficPolicy', 'NoSuchTrafficPolicyInstance', 'NotAuthorizedException', 'PriorRequestNotComplete', 'PublicZoneVPCAssociation', 'QueryLoggingConfigAlreadyExists', 'ThrottlingException', 'TooManyHealthChecks', 'TooManyHostedZones', 'TooManyTrafficPolicies', 'TooManyTrafficPolicyInstances', 'TooManyTrafficPolicyVersionsForCurrentPolicy', 'TooManyVPCAssociationAuthorizations', 'TrafficPolicyAlreadyExists', 'TrafficPolicyInUse', 'TrafficPolicyInstanceAlreadyExists', 'VPCAssociationAuthorizationNotFound', 'VPCAssociationNotFound'],
     's3': ['BucketAlreadyExists', 'BucketAlreadyOwnedByYou', 'NoSuchBucket', 'NoSuchKey', 'NoSuchUpload', 'ObjectAlreadyInActiveTierError', 'ObjectNotInActiveTierError'],
     'sagemaker': ['ResourceInUse', 'ResourceLimitExceeded', 'ResourceNotFound'],
     'secretsmanager': ['DecryptionFailure', 'EncryptionFailure', 'InternalServiceError', 'InvalidNextTokenException', 'InvalidParameterException', 'InvalidRequestException', 'LimitExceededException', 'MalformedPolicyDocumentException', 'PreconditionNotMetException', 'ResourceExistsException', 'ResourceNotFoundException'],
     'ses': ['AccountSendingPausedException', 'AlreadyExists', 'CannotDelete', 'ConfigurationSetAlreadyExists', 'ConfigurationSetDoesNotExist', 'ConfigurationSetSendingPausedException', 'CustomVerificationEmailInvalidContent', 'CustomVerificationEmailTemplateAlreadyExists', 'CustomVerificationEmailTemplateDoesNotExist', 'EventDestinationAlreadyExists', 'EventDestinationDoesNotExist', 'FromEmailAddressNotVerified', 'InvalidCloudWatchDestination', 'InvalidConfigurationSet', 'InvalidFirehoseDestination', 'InvalidLambdaFunction', 'InvalidPolicy', 'InvalidRenderingParameter', 'InvalidS3Configuration', 'InvalidSNSDestination', 'InvalidSnsTopic', 'InvalidTemplate', 'InvalidTrackingOptions', 'LimitExceeded', 'MailFromDomainNotVerifiedException', 'MessageRejected', 'MissingRenderingAttribute', 'ProductionAccessNotGranted', 'RuleDoesNotExist', 'RuleSetDoesNotExist', 'TemplateDoesNotExist', 'TrackingOptionsAlreadyExistsException', 'TrackingOptionsDoesNotExistException'],
     'sns': ['AuthorizationError', 'EndpointDisabled', 'FilterPolicyLimitExceeded', 'InternalError', 'InvalidParameter', 'ParameterValueInvalid', 'InvalidSecurity', 'KMSAccessDenied', 'KMSDisabled', 'KMSInvalidState', 'KMSNotFound', 'KMSOptInRequired', 'KMSThrottling', 'NotFound', 'PlatformApplicationDisabled', 'SubscriptionLimitExceeded', 'Throttled', 'TopicLimitExceeded'],
     'sqs': ['AWS.SimpleQueueService.BatchEntryIdsNotDistinct', 'AWS.SimpleQueueService.BatchRequestTooLong', 'AWS.SimpleQueueService.EmptyBatchRequest', 'InvalidAttributeName', 'AWS.SimpleQueueService.InvalidBatchEntryId', 'InvalidIdFormat', 'InvalidMessageContents', 'AWS.SimpleQueueService.MessageNotInflight', 'OverLimit', 'AWS.SimpleQueueService.PurgeQueueInProgress', 'AWS.SimpleQueueService.QueueDeletedRecently', 'AWS.SimpleQueueService.NonExistentQueue', 'QueueAlreadyExists', 'ReceiptHandleIsInvalid', 'AWS.SimpleQueueService.TooManyEntriesInBatchRequest', 'AWS.SimpleQueueService.UnsupportedOperation'],
     'ssm': ['AlreadyExistsException', 'AssociatedInstances', 'AssociationAlreadyExists', 'AssociationDoesNotExist', 'AssociationExecutionDoesNotExist', 'AssociationLimitExceeded', 'AssociationVersionLimitExceeded', 'AutomationDefinitionNotFoundException', 'AutomationDefinitionVersionNotFoundException', 'AutomationExecutionLimitExceededException', 'AutomationExecutionNotFoundException', 'AutomationStepNotFoundException', 'ComplianceTypeCountLimitExceededException', 'CustomSchemaCountLimitExceededException', 'DocumentAlreadyExists', 'DocumentLimitExceeded', 'DocumentPermissionLimit', 'DocumentVersionLimitExceeded', 'DoesNotExistException', 'DuplicateDocumentContent', 'DuplicateDocumentVersionName', 'DuplicateInstanceId', 'FeatureNotAvailableException', 'HierarchyLevelLimitExceededException', 'HierarchyTypeMismatchException', 'IdempotentParameterMismatch', 'InternalServerError', 'InvalidActivation', 'InvalidActivationId', 'InvalidAggregatorException', 'InvalidAllowedPatternException', 'InvalidAssociation', 'InvalidAssociationVersion', 'InvalidAutomationExecutionParametersException', 'InvalidAutomationSignalException', 'InvalidAutomationStatusUpdateException', 'InvalidCommandId', 'InvalidDeleteInventoryParametersException', 'InvalidDeletionIdException', 'InvalidDocument', 'InvalidDocumentContent', 'InvalidDocumentOperation', 'InvalidDocumentSchemaVersion', 'InvalidDocumentVersion', 'InvalidFilter', 'InvalidFilterKey', 'InvalidFilterOption', 'InvalidFilterValue', 'InvalidInstanceId', 'InvalidInstanceInformationFilterValue', 'InvalidInventoryGroupException', 'InvalidInventoryItemContextException', 'InvalidInventoryRequestException', 'InvalidItemContentException', 'InvalidKeyId', 'InvalidNextToken', 'InvalidNotificationConfig', 'InvalidOptionException', 'InvalidOutputFolder', 'InvalidOutputLocation', 'InvalidParameters', 'InvalidPermissionType', 'InvalidPluginName', 'InvalidResourceId', 'InvalidResourceType', 'InvalidResultAttributeException', 'InvalidRole', 
'InvalidSchedule', 'InvalidTarget', 'InvalidTypeNameException', 'InvalidUpdate', 'InvocationDoesNotExist', 'ItemContentMismatchException', 'ItemSizeLimitExceededException', 'MaxDocumentSizeExceeded', 'ParameterAlreadyExists', 'ParameterLimitExceeded', 'ParameterMaxVersionLimitExceeded', 'ParameterNotFound', 'ParameterPatternMismatchException', 'ParameterVersionLabelLimitExceeded', 'ParameterVersionNotFound', 'ResourceDataSyncAlreadyExistsException', 'ResourceDataSyncCountExceededException', 'ResourceDataSyncInvalidConfigurationException', 'ResourceDataSyncNotFoundException', 'ResourceInUseException', 'ResourceLimitExceededException', 'StatusUnchanged', 'SubTypeCountLimitExceededException', 'TargetInUseException', 'TargetNotConnected', 'TooManyTagsError', 'TooManyUpdates', 'TotalSizeLimitExceededException', 'UnsupportedInventoryItemContextException', 'UnsupportedInventorySchemaVersionException', 'UnsupportedOperatingSystem', 'UnsupportedParameterType', 'UnsupportedPlatformType'],
     'stepfunctions': ['ActivityDoesNotExist', 'ActivityLimitExceeded', 'ActivityWorkerLimitExceeded', 'ExecutionAlreadyExists', 'ExecutionDoesNotExist', 'ExecutionLimitExceeded', 'InvalidArn', 'InvalidDefinition', 'InvalidExecutionInput', 'InvalidName', 'InvalidOutput', 'InvalidToken', 'MissingRequiredParameter', 'ResourceNotFound', 'StateMachineAlreadyExists', 'StateMachineDeleting', 'StateMachineDoesNotExist', 'StateMachineLimitExceeded', 'TaskDoesNotExist', 'TaskTimedOut', 'TooManyTags'],
     'sts': ['ExpiredTokenException', 'IDPCommunicationError', 'IDPRejectedClaim', 'InvalidAuthorizationMessageException', 'InvalidIdentityToken', 'MalformedPolicyDocument', 'PackedPolicyTooLarge', 'RegionDisabledException'],
     'xray': ['InvalidRequestException', 'RuleLimitExceededException', 'ThrottledException']}
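    A mapping like the one above is just plain data, so it can be consulted programmatically. The following is a minimal sketch with a small hypothetical excerpt of the table and a helper name of my own choosing; it is not part of boto3 itself:

```python
# Small excerpt of the per-service error-code mapping above (illustrative only).
KNOWN_ERROR_CODES = {
    's3': ['BucketAlreadyExists', 'BucketAlreadyOwnedByYou', 'NoSuchBucket',
           'NoSuchKey', 'NoSuchUpload'],
    'sts': ['ExpiredTokenException', 'RegionDisabledException'],
}

def is_known_code(service, code):
    """Return True if `code` is a documented error code for `service`."""
    return code in KNOWN_ERROR_CODES.get(service, [])
```

    Checking `error.response['Error']['Code']` against such a table lets you decide whether a `ClientError` is an expected condition for the service you are calling.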
    

    回答 4

    或者对异常类名进行比较，例如：

    except ClientError as e:
        if 'EntityAlreadyExistsException' == e.__class__.__name__:
            # handle specific error

    因为这些异常类是动态创建的，所以您无法导入它们，也就无法用常规的 Python 方式直接捕获。

    Or a comparison on the class name e.g.

    except ClientError as e:
        if 'EntityAlreadyExistsException' == e.__class__.__name__:
            # handle specific error
    

    Because they are dynamically created you can never import the class and catch it using real Python.
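    The mechanism behind this can be sketched in plain Python. The snippet below is a simplified illustration of dynamically created exception subclasses, not the real botocore implementation; all names in it are made up for the example:

```python
class ClientError(Exception):
    """Stand-in for botocore.exceptions.ClientError."""
    pass

def make_error_class(code):
    # Build a ClientError subclass named after the error code at runtime,
    # roughly what botocore's error factory does for each service.
    return type(code, (ClientError,), {})

EntityAlreadyExistsException = make_error_class('EntityAlreadyExistsException')

def classify(exc):
    # Since the class cannot be imported, compare its name instead.
    if isinstance(exc, ClientError) and exc.__class__.__name__ == 'EntityAlreadyExistsException':
        return 'already-exists'
    return 'other'
```

    Because the subclass only exists at runtime, comparing `exc.__class__.__name__` (or the error code in `exc.response`) is the practical way to tell these errors apart.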


    回答 5

    如果要使用Python3调用sign_up API(AWS Cognito),则可以使用以下代码。

    def registerUser(userObj):
        ''' Registers the user to AWS Cognito.
        '''
    
        # Mobile number is not a mandatory field. 
        if(len(userObj['user_mob_no']) == 0):
            mobilenumber = ''
        else:
            mobilenumber = userObj['user_country_code']+userObj['user_mob_no']
    
        secretKey = bytes(settings.SOCIAL_AUTH_COGNITO_SECRET, 'latin-1')
        clientId = settings.SOCIAL_AUTH_COGNITO_KEY 
    
        digest = hmac.new(secretKey,
                    msg=(userObj['user_name'] + clientId).encode('utf-8'),
                    digestmod=hashlib.sha256
                    ).digest()
        signature = base64.b64encode(digest).decode()
    
        client = boto3.client('cognito-idp', region_name='eu-west-1' ) 
    
        try:
            response = client.sign_up(
                        ClientId=clientId,
                        Username=userObj['user_name'],
                        Password=userObj['password1'],
                        SecretHash=signature,
                        UserAttributes=[
                            {
                                'Name': 'given_name',
                                'Value': userObj['given_name']
                            },
                            {
                                'Name': 'family_name',
                                'Value': userObj['family_name']
                            },
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                            {
                                'Name': 'phone_number',
                                'Value': mobilenumber
                            }
                        ],
                        ValidationData=[
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                        ]
                        ,
                        AnalyticsMetadata={
                            'AnalyticsEndpointId': 'string'
                        },
                        UserContextData={
                            'EncodedData': 'string'
                        }
                    )
        except ClientError as error:
            return {"errorcode": error.response['Error']['Code'],
                "errormessage" : error.response['Error']['Message'] }
        except Exception as e:
            return {"errorcode": "Something went wrong. Try later or contact the admin" }
        return {"success": "User registered successfully. "}

    error.response['Error']['Code'] 将是 InvalidPasswordException、UsernameExistsException 等。因此，在主函数或调用该函数的地方，您可以编写相应逻辑，向用户给出有意义的提示信息。

    响应示例(error.response):

    {
      "Error": {
        "Message": "Password did not conform with policy: Password must have symbol characters",
        "Code": "InvalidPasswordException"
      },
      "ResponseMetadata": {
        "RequestId": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
        "HTTPStatusCode": 400,
        "HTTPHeaders": {
          "date": "Wed, 17 Jul 2019 09:38:32 GMT",
          "content-type": "application/x-amz-json-1.1",
          "content-length": "124",
          "connection": "keep-alive",
          "x-amzn-requestid": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
          "x-amzn-errortype": "InvalidPasswordException:",
          "x-amzn-errormessage": "Password did not conform with policy: Password must have symbol characters"
        },
        "RetryAttempts": 0
      }
    }

    有关更多参考：https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html#CognitoIdentityProvider.Client.sign_up

    If you are calling the sign_up API (AWS Cognito) using Python3, you can use the following code.

    def registerUser(userObj):
        ''' Registers the user to AWS Cognito.
        '''
    
        # Mobile number is not a mandatory field. 
        if(len(userObj['user_mob_no']) == 0):
            mobilenumber = ''
        else:
            mobilenumber = userObj['user_country_code']+userObj['user_mob_no']
    
        secretKey = bytes(settings.SOCIAL_AUTH_COGNITO_SECRET, 'latin-1')
        clientId = settings.SOCIAL_AUTH_COGNITO_KEY 
    
        digest = hmac.new(secretKey,
                    msg=(userObj['user_name'] + clientId).encode('utf-8'),
                    digestmod=hashlib.sha256
                    ).digest()
        signature = base64.b64encode(digest).decode()
    
        client = boto3.client('cognito-idp', region_name='eu-west-1' ) 
    
        try:
            response = client.sign_up(
                        ClientId=clientId,
                        Username=userObj['user_name'],
                        Password=userObj['password1'],
                        SecretHash=signature,
                        UserAttributes=[
                            {
                                'Name': 'given_name',
                                'Value': userObj['given_name']
                            },
                            {
                                'Name': 'family_name',
                                'Value': userObj['family_name']
                            },
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                            {
                                'Name': 'phone_number',
                                'Value': mobilenumber
                            }
                        ],
                        ValidationData=[
                            {
                                'Name': 'email',
                                'Value': userObj['user_email']
                            },
                        ]
                        ,
                        AnalyticsMetadata={
                            'AnalyticsEndpointId': 'string'
                        },
                        UserContextData={
                            'EncodedData': 'string'
                        }
                    )
        except ClientError as error:
            return {"errorcode": error.response['Error']['Code'],
                "errormessage" : error.response['Error']['Message'] }
        except Exception as e:
            return {"errorcode": "Something went wrong. Try later or contact the admin" }
        return {"success": "User registered successfully. "}
    

    error.response['Error']['Code'] will be InvalidPasswordException, UsernameExistsException etc. So in the main function or where you are calling the function, you can write the logic to provide a meaningful message to the user.

    An example for the response (error.response):

    {
      "Error": {
        "Message": "Password did not conform with policy: Password must have symbol characters",
        "Code": "InvalidPasswordException"
      },
      "ResponseMetadata": {
        "RequestId": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
        "HTTPStatusCode": 400,
        "HTTPHeaders": {
          "date": "Wed, 17 Jul 2019 09:38:32 GMT",
          "content-type": "application/x-amz-json-1.1",
          "content-length": "124",
          "connection": "keep-alive",
          "x-amzn-requestid": "c8a591d5-8c51-4af9-8fad-b38b270c3ca2",
          "x-amzn-errortype": "InvalidPasswordException:",
          "x-amzn-errormessage": "Password did not conform with policy: Password must have symbol characters"
        },
        "RetryAttempts": 0
      }
    }
    

    For further reference : https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/cognito-idp.html#CognitoIdentityProvider.Client.sign_up
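    The SecretHash computation in the answer above can be isolated into a small standalone helper: Base64(HMAC-SHA256(client_secret, username + client_id)). This sketch encodes the secret as UTF-8 (the answer used latin-1; either works for ASCII secrets), and the function name is my own:

```python
import base64
import hashlib
import hmac

def cognito_secret_hash(username, client_id, client_secret):
    """Compute the Cognito SecretHash for a given username/app client pair."""
    digest = hmac.new(
        client_secret.encode('utf-8'),
        msg=(username + client_id).encode('utf-8'),
        digestmod=hashlib.sha256,
    ).digest()
    return base64.b64encode(digest).decode()
```

    The result is deterministic for a given (username, client_id, secret) triple, which is why Cognito can verify it server-side.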


    回答 6

    如果您必须与不太友好的 logs 客户端打交道（CloudWatch Logs 的 put-log-events），下面是我为了正确捕获 Boto3 客户端异常所做的处理：

    try:
        ### Boto3 client code here...
    
    except boto_exceptions.ClientError as error:
        Log.warning("Caught client error code %s",
                    error.response['Error']['Code'])
    
        if error.response['Error']['Code'] in ["DataAlreadyAcceptedException",
                                               "InvalidSequenceTokenException"]:
            Log.debug(
                "Fetching sequence_token from boto error response['Error']['Message'] %s",
                error.response["Error"]["Message"])
            # NOTE: apparently there's no sequenceToken attribute in the response so we have
            # to parse response["Error"]["Message"] string
            sequence_token = error.response["Error"]["Message"].split(":")[-1].strip(" ")
            Log.debug("Setting sequence_token to %s", sequence_token)

    这在第一次尝试(使用空LogStream时)和后续尝试中都有效。

    In case you have to deal with the arguably unfriendly logs client (CloudWatch Logs put-log-events), this is what I had to do to properly catch Boto3 client exceptions:

    try:
        ### Boto3 client code here...
    
    except boto_exceptions.ClientError as error:
        Log.warning("Caught client error code %s",
                    error.response['Error']['Code'])
    
        if error.response['Error']['Code'] in ["DataAlreadyAcceptedException",
                                               "InvalidSequenceTokenException"]:
            Log.debug(
                "Fetching sequence_token from boto error response['Error']['Message'] %s",
                error.response["Error"]["Message"])
            # NOTE: apparently there's no sequenceToken attribute in the response so we have
            # to parse response["Error"]["Message"] string
            sequence_token = error.response["Error"]["Message"].split(":")[-1].strip(" ")
            Log.debug("Setting sequence_token to %s", sequence_token)
    

    This works both at first attempt (with empty LogStream) and subsequent ones.
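    The message-parsing step above can be pulled into its own helper. The sample message below is a plausible reconstruction of what `InvalidSequenceTokenException` returns, with a made-up token value:

```python
def extract_sequence_token(message):
    """Pull the expected sequence token out of an error message of the form
    '...The next expected sequenceToken is: <token>'."""
    return message.split(":")[-1].strip()

sample = ("The given sequenceToken is invalid. "
          "The next expected sequenceToken is: 49612345678901234567")
```

    Splitting on the last colon is fragile but, as the answer notes, there is no structured `sequenceToken` field in the error response to use instead.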


    回答 7

    在 @armod 补充了“异常会直接添加到 client 对象上”这一更新之后，我来展示如何查看为您的客户端类定义的所有异常。

    当您使用 session.create_client() 或 boto3.client() 创建客户端时，异常类是动态生成的。其内部会调用 botocore.errorfactory.ClientExceptionsFactory._create_client_exceptions() 方法，并用构造出的异常类填充 client.exceptions 字段。

    所有类名都可以在client.exceptions._code_to_exception字典中找到,因此您可以使用以下代码段列出所有类型:

    client = boto3.client('s3')
    
    for ex_code in client.exceptions._code_to_exception:
        print(ex_code)

    希望能帮助到你。

    Following @armod’s update about exceptions being added right on client objects, I’ll show how you can see all exceptions defined for your client class.

    Exceptions are generated dynamically when you create your client with session.create_client() or boto3.client(). Internally it calls method botocore.errorfactory.ClientExceptionsFactory._create_client_exceptions() and fills client.exceptions field with constructed exception classes.

    All class names are available in client.exceptions._code_to_exception dictionary, so you can list all types with following snippet:

    client = boto3.client('s3')
    
    for ex_code in client.exceptions._code_to_exception:
        print(ex_code)
    

    Hope it helps.


    回答 8

    当创建失败时，您需要对问题做一些处理，而现在您只是把实际的异常原样返回了。例如，如果用户已存在并不算问题，而您想把这个函数当作 get_or_create 来用，那么可以通过返回已存在的用户对象来处理这种情况。

    try:
        user = iam_conn.create_user(UserName=username)
        return user
    except botocore.exceptions.ClientError as e:
    
        #this exception could actually be other things other than exists, so you want to evaluate it further in your real code.
        if e.message.startswith(
            'enough of the exception message to identify it as the one you want'):
    
            print('that user already exists.')
            user = iam_conn.get_user(UserName=username)
            return user
    
        elif e.message.some_other_condition:
    
             #something else
        else:
             #unhandled ClientError
             raise(e)
    except SomeOtherExceptionTypeYouCareAbout as e:
        #handle it
    
    # any unhandled exception will raise here at this point.
    # if you want a general handler
    
    except Exception as e:
        #handle it.

    话虽如此，用户已存在也可能确实是您应用中的问题；这种情况下，您应该把异常处理放在调用这个创建用户函数的代码外层，由调用方决定如何处理，例如要求用户输入另一个用户名，或任何对您的应用有意义的做法。

    You need to do something when it fails to handle the issue. Right now you are returning the actual exception. For example, if its not a problem that the user exists already and you want to use it as a get_or_create function maybe you handle the issue by returning the existing user object.

    try:
        user = iam_conn.create_user(UserName=username)
        return user
    except botocore.exceptions.ClientError as e:
    
        #this exception could actually be other things other than exists, so you want to evaluate it further in your real code.
        if e.message.startswith(
            'enough of the exception message to identify it as the one you want'):
    
            print('that user already exists.')
            user = iam_conn.get_user(UserName=username)
            return user
    
        elif e.message.some_other_condition:
    
             #something else
        else:
             #unhandled ClientError
             raise(e)
    except SomeOtherExceptionTypeYouCareAbout as e:
        #handle it
    
    # any unhandled exception will raise here at this point.
    # if you want a general handler
    
    except Exception as e:
        #handle it.
    

    That said, maybe it is a problem for your app, in which case you want to put the exception handler around the code that called your create user function and let the calling function determine how to deal with it, for example, by asking the user to input another username, or whatever makes sense for your application.
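    The get_or_create pattern described above can be sketched without any AWS dependency. The in-memory store and `AlreadyExistsError` below are hypothetical stand-ins for the IAM client and its dynamically created exception:

```python
class AlreadyExistsError(Exception):
    """Stand-in for an 'entity already exists' ClientError."""
    pass

class UserStore:
    """Minimal in-memory stand-in for the IAM client."""
    def __init__(self):
        self._users = {}

    def create_user(self, username):
        if username in self._users:
            raise AlreadyExistsError(username)
        self._users[username] = {'UserName': username}
        return self._users[username]

    def get_user(self, username):
        return self._users[username]

def get_or_create_user(store, username):
    try:
        return store.create_user(username)
    except AlreadyExistsError:
        # Existence is not an error for this use case: fetch the existing user.
        return store.get_user(username)

store = UserStore()
first = get_or_create_user(store, "alice")
second = get_or_create_user(store, "alice")
```

    The second call hits the except branch and returns the same stored object, which is exactly the behaviour the answer suggests for the non-fatal "already exists" case.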