Tag archive: postgresql

Error installing psycopg2: library not found for -lssl

Question: Error installing psycopg2: library not found for -lssl


I run

sudo pip install psycopg2

and I get a bunch of output that looks like:

cc -DNDEBUG -g -fwrapv -Os .....
.....
cc -DNDEBUG -g -fwrapv -Os .....
.....

And at the end it says:

ld: library not found for -lssl

clang: error: linker command failed with exit code 1 (use -v to see invocation)

error: command 'cc' failed with exit status 1

----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/bz/pvj1g9xj16d10pjjgbrfl3fw0000gn/T/pip_build_root/psycopg2/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/bz/pvj1g9xj16d10pjjgbrfl3fw0000gn/T/pip-uE3thn-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/bz/pvj1g9xj16d10pjjgbrfl3fw0000gn/T/pip_build_root/psycopg2
Storing debug log for failure in /Users/Tyler/Library/Logs/pip.log

Running easy_install or doing it from source both give me the same error at the end (the part about library not found for -lssl).


Running brew install (or upgrade) openssl yields the below

$ brew upgrade openssl
Error: openssl-1.0.1h already installed

Can anyone help me out?


Answer 0


For anyone looking for a solution for this on macOS Sierra 10.12 (or later, most likely): I fixed this by installing the command line tools:

xcode-select --install

After that, pip install psycopg2 should work.

If it doesn’t, you could also try to link against brew’s openssl:

env LDFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib" pip install psycopg2

with openssl installed via brew. Note that brew link openssl --force no longer works:

$ brew link openssl --force                                                                                 17.5s
Warning: Refusing to link: openssl
Linking keg-only openssl means you may end up linking against the insecure,
deprecated system OpenSSL while using the headers from Homebrew's openssl.
Instead, pass the full include/library paths to your compiler e.g.:
  -I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib

As @macho points out below if this still does not work, you might need to use the --no-cache option of pip, e.g.

env LDFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib" pip --no-cache install psycopg2

Answer 1


I had OpenSSL installed from brew (brew install openssl)

The following worked for me:

export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/openssl/include"
pip install psycopg2

Answer 2


When running brew link openssl I get the following message:

$ brew link openssl
Warning: Refusing to link: openssl
Linking keg-only openssl means you may end up linking against the insecure,
deprecated system OpenSSL while using the headers from Homebrew's openssl.
Instead, pass the full include/library paths to your compiler e.g.:
  -I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib

Following this advice here’s the pip command you need to use:

$ pip install -r requirements.txt --global-option=build_ext --global-option="-I/usr/local/opt/openssl/include" --global-option="-L/usr/local/opt/openssl/lib"

Answer 3


What worked for me was the hint provided in the command to link openssl,

$ brew link openssl
Warning: Refusing to link macOS-provided software: openssl
If you need to have openssl first in your PATH run:
  echo 'export PATH="/usr/local/opt/openssl/bin:$PATH"' >> ~/.zshrc

For compilers to find openssl you may need to set:
  export LDFLAGS="-L/usr/local/opt/openssl/lib"
  export CPPFLAGS="-I/usr/local/opt/openssl/include"

$ export LDFLAGS="-L/usr/local/opt/openssl/lib"
$ export CPPFLAGS="-I/usr/local/opt/openssl/include"

$ pip install psycopg2
Collecting psycopg2
  Using cached https://files.pythonhosted.org/packages/23/7e/93c325482c328619870b6cd09370f6dbe1148283daca65115cd63642e60f/psycopg2-2.8.2.tar.gz
Installing collected packages: psycopg2
  Running setup.py install for psycopg2 ... done
Successfully installed psycopg2-2.8.2

Answer 4


On Mojave I added these to my .bash_profile:

export PATH="/usr/local/opt/openssl/bin:$PATH"
export LDFLAGS="-L/usr/local/opt/curl/lib -L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/curl/include -I/usr/local/opt/openssl/include"

I was then able to install psycopg2 2.8.3 in a Python 3.7.4 virtualenv.

This was after reinstalling Xcode and the command line tools.

All the answers above helped!


Answer 5


With MacOS Catalina 10.15.4, the following was the only command that worked for me:

env LDFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib" pip install psycopg2

Answer 6


This is a problem with newer macOS versions, where pip cannot install cryptography. What fixed my problem was providing the env to the install command:

brew install openssl
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" <YOUR COMMAND HERE>

You can replace <YOUR COMMAND HERE> with pip install cryptography, or pip install <SOMETHING THAT REQUIRES cryptography> for example.

Credit to this article: Fixing macOS Sierra fatal error: ‘openssl/opensslv.h’ or ‘openssl/aes.h’ file not found


Answer 7


Using Fish, the following two commands solved this issue for me after installing OpenSSL with Homebrew.

set -gx LDFLAGS "-L/usr/local/opt/openssl/lib"
set -gx CPPFLAGS "-I/usr/local/opt/openssl/include"

Use brew info openssl to get up-to-date info.


Answer 8


Recently had this problem in High Sierra, having just installed Python 3.7 in a virtualenv.

The solution is to use a later version of psycopg2. Version 2.7.7 worked, whereas 2.7.1 did not.


Answer 9


Instead of installing psycopg2, install psycopg2-binary, from the same authors:

pip install psycopg2-binary

This is what the documentation says about this PyPI package:

You can […] obtain a stand-alone package, not requiring a compiler or external libraries, by installing the psycopg2-binary package from PyPI:

$ pip install psycopg2-binary

The binary package is a practical choice for development and testing but in production it is advised to use the package built from sources.


Answer 10


I was having this issue on Mojave. Mojave does not create a /usr/include directory, which psycopg2 needs to install. This was not obvious. I found the solution here: How to update Xcode from command line, which references: https://forums.developer.apple.com/thread/104296


Answer 11


I had this same error and got it to resolve after I pip installed cython


Answer 12


Running PyCharm from a conda environment, I solved my issue using:

--> conda install psycopg2
The following packages will be UPDATED: ...

...
Proceed ([y]/n)? 
--> y
--> pip3 install psycopg2
Installing collected packages: psycopg2
Running setup.py install for psycopg2 ... done
Successfully installed psycopg2-2.8.4




Answer 13


I installed OpenSSL using MacPorts, so the directories are not the same as Homebrew's.

sudo port install openssl

I found the directories by doing:

port contents openssl | grep lib
port contents openssl | grep include

Then I exported the variables:

export LDFLAGS="-L/opt/local/lib"
export CPPFLAGS="-I/opt/local/include/openssl"

You might also have to:

xcode-select --install

Answer 14


export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/opt/openssl/lib/

worked for me


Answer 15


I’ve managed to fix it by using:

brew unlink openssl && brew link openssl --force

I am not sure how this differs from the brew uninstall/upgrades that I did on OpenSSL in prior attempts I’ve made. My assumption is that these operations left some of the “faulty” shared libraries which were preventing this from working. Note that this also fixed issues with installing the Python cryptography module.


Django DB migrations: cannot ALTER TABLE because it has pending trigger events

Question: Django DB migrations: cannot ALTER TABLE because it has pending trigger events


I want to remove null=True from a TextField:

-    footer=models.TextField(null=True, blank=True)
+    footer=models.TextField(blank=True, default='')

I created a schema migration:

manage.py schemamigration fooapp --auto

Since some footer columns contain NULL I get this error if I run the migration:

django.db.utils.IntegrityError: column “footer” contains null values

I added this to the schema migration:

    for sender in orm['fooapp.EmailSender'].objects.filter(footer=None):
        sender.footer=''
        sender.save()

Now I get:

django.db.utils.DatabaseError: cannot ALTER TABLE "fooapp_emailsender" because it has pending trigger events

What is wrong?


Answer 0


Another reason for this may be that you are trying to set a column to NOT NULL when it still contains NULL values.
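
A quick way to check for such rows before migrating is to count them from manage.py shell; a minimal sketch using the fooapp.EmailSender model and footer field from the question:

from fooapp.models import EmailSender

# should print 0 before you add the NOT NULL constraint
print(EmailSender.objects.filter(footer__isnull=True).count())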


Answer 1


Every migration is inside a transaction. In PostgreSQL you must not update the table and then alter the table schema in one transaction.

You need to split the data migration and the schema migration. First create the data migration with this code:

 for sender in orm['fooapp.EmailSender'].objects.filter(footer=None):
    sender.footer=''
    sender.save()

Then create the schema migration:

manage.py schemamigration fooapp --auto

Now you have two transactions and the migration in two steps should work.
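
The commands above are from the South era. For reference, here is a minimal sketch of the same two-step split with Django's built-in migrations framework, reusing the fooapp/EmailSender names from the question (the file name and dependency are illustrative):

# fooapp/migrations/0002_fill_footer.py -- the data migration, in its own transaction
from django.db import migrations

def fill_empty_footers(apps, schema_editor):
    # use the historical model, not a direct import
    EmailSender = apps.get_model('fooapp', 'EmailSender')
    EmailSender.objects.filter(footer__isnull=True).update(footer='')

class Migration(migrations.Migration):
    dependencies = [('fooapp', '0001_initial')]
    operations = [migrations.RunPython(fill_empty_footers, migrations.RunPython.noop)]

The schema change itself (dropping null=True) then goes into a separate migration generated afterwards with manage.py makemigrations fooapp.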


Answer 2


I have just hit this problem. You can also use db.start_transaction() and db.commit_transaction() in the schema migration to separate data changes from schema changes. It is probably not as clean as having a separate data migration, but in my case I would need schema, data, and then another schema migration, so I decided to do it all at once.


Answer 3


In the operations I put SET CONSTRAINTS:

operations = [
    migrations.RunSQL('SET CONSTRAINTS ALL IMMEDIATE;'),
    migrations.RunPython(migration_func),
    migrations.RunSQL('SET CONSTRAINTS ALL DEFERRED;'),
]

Answer 4


You are altering the column schema. The footer column can no longer contain blank (NULL) values, yet such values are most likely already stored in the DB for that column. Django will update those rows from NULL to the new default value as part of the migrate command, and it seems to try to update the rows and change the schema at the same time (I'm not sure).

The problem is that you can't alter a column's schema while also updating that same column's values in the same transaction.

One solution would be to delete the migration file that updates the schema, run a script to update all those values to your default value, and then re-run the migration to update the schema. This way the data update is already done, and the Django migration only alters the schema.


How to set up a PostgreSQL database with Django

Question: How to set up a PostgreSQL database with Django


I’m new to Python and Django.

I’m configuring a Django project using a PostgreSQL database engine backend, but I’m getting errors on each database operation. For example, when I run manage.py syncdb, I get:

C:\xampp\htdocs\djangodir>python manage.py syncdb
Traceback (most recent call last):
  File "manage.py", line 11, in <module>
    execute_manager(settings)
  File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line
438, in execute_manager
    utility.execute()
  File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line
379, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line
261, in fetch_command
    klass = load_command_class(app_name, subcommand)
  File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line
67, in load_command_class
    module = import_module('%s.management.commands.%s' % (app_name, name))
  File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 35, in im
port_module
    __import__(name)
  File "C:\Python27\lib\site-packages\django\core\management\commands\syncdb.py"
, line 7, in <module>
    from django.core.management.sql import custom_sql_for_model, emit_post_sync_
signal
  File "C:\Python27\lib\site-packages\django\core\management\sql.py", line 6, in
 <module>
    from django.db import models
  File "C:\Python27\lib\site-packages\django\db\__init__.py", line 77, in <modul
e>
    connection = connections[DEFAULT_DB_ALIAS]
  File "C:\Python27\lib\site-packages\django\db\utils.py", line 92, in __getitem
__
    backend = load_backend(db['ENGINE'])
  File "C:\Python27\lib\site-packages\django\db\utils.py", line 33, in load_back
end
    return import_module('.base', backend_name)
  File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 35, in im
port_module
    __import__(name)
  File "C:\Python27\lib\site-packages\django\db\backends\postgresql\base.py", li
ne 23, in <module>
    raise ImproperlyConfigured("Error loading psycopg module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg module: No mo
dule named psycopg

Can someone give me a clue on what is going on?


Answer 0


You need to install the psycopg2 Python library.

Installation


Download http://initd.org/psycopg/, then install it under Python PATH

After downloading, easily extract the tarball and:

$ python setup.py install

Or if you wish, install it by either easy_install or pip.

(I prefer to use pip over easy_install for no reason.)

  • $ easy_install psycopg2
  • $ pip install psycopg2

Configuration


in settings.py

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'db_name',                      
        'USER': 'db_user',
        'PASSWORD': 'db_user_password',
        'HOST': '',
        'PORT': 'db_port_number',
    }
}

– Other installation instructions can be found on the download page and the install page.
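
As a quick sanity check after configuring the settings, you can query the server from manage.py shell; a minimal sketch, not specific to the answer above:

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SELECT version()")
    print(cursor.fetchone())  # prints the PostgreSQL server version if the settings are correct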


Answer 1


Also make sure you have the PostgreSQL development package installed. On Ubuntu you need to do something like this:

$ sudo apt-get install libpq-dev

Answer 2


The step-by-step process that I use:

 - sudo apt-get install python-dev
 - sudo apt-get install postgresql-server-dev-9.1
 - sudo apt-get install python-psycopg2 - Or sudo pip install psycopg2

You may want to install a graphic tool to manage your databases, for that you can do:

sudo apt-get install postgresql pgadmin3 

After that, you must change the Postgres user's password, then do:

 - sudo su
 - su postgres -c psql postgres
 - ALTER USER postgres WITH PASSWORD 'YourPassWordHere';
 - \q

On your settings.py file you do:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': '',
        'PORT': '',
    }
}

Extra:

If you want to create the db using the command line you can just do:

- sudo su
- su postgres -c psql postgres
- CREATE DATABASE dbname;
- CREATE USER djangouser WITH ENCRYPTED PASSWORD 'myPasswordHere';
- GRANT ALL PRIVILEGES ON DATABASE dbname TO djangouser;

On your settings.py file you do:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'djangouser',
        'PASSWORD': 'myPasswordHere',
        'HOST': '',
        'PORT': '',
    }
}

Answer 3


This may seem a bit lengthy, but it worked for me without any error.

First, install phpPgAdmin from the Ubuntu Software Center.

Then run these steps in terminal.

sudo apt-get install libpq-dev python-dev
pip install psycopg2
sudo apt-get install postgresql postgresql-contrib phppgadmin

Start the apache server

sudo service apache2 start

Now run this too in terminal, to edit the apache file.

sudo gedit /etc/apache2/apache2.conf

Add the following line to the opened file:

Include /etc/apache2/conf.d/phppgadmin

Now reload apache. Use terminal.

sudo /etc/init.d/apache2 reload

Now you will have to create a new database. Login as ‘postgres’ user. Continue in terminal.

sudo su - postgres

In case you have trouble with the password of ‘postgres’, you can change it using the answer here https://stackoverflow.com/a/12721020/1990793 and continue with the steps.

Now create a database

createdb <db_name>

Now create a new user to login to phppgadmin later, providing a new password.

createuser -P <new_user>

Now your PostgreSQL has been set up, and you can go to:

http://localhost/phppgadmin/

and login using the new user you’ve created, in order to view the database.


Answer 4


You can install “psycopg” with the following command:

# sudo easy_install psycopg2

Alternatively, you can use pip :

# pip install psycopg2

easy_install and pip are included with ActivePython, or manually installed from the respective project sites.

Or, simply get the pre-built Windows installer.


Answer 5


The immediate problem seems to be that you’re missing the psycopg module.


Answer 6


If you are using Fedora 20, Django 1.6.5, postgresql 9.3.* and you need the psycopg2 module, do this:

yum install postgresql-devel
easy_install psycopg2

If you are like me, you may have trouble finding the well documented libpq-dev rpm… The above worked for me just now.


Answer 7


I was having the same issue on Mac.

The solution was to use only pip to install everything, and to touch up a few things.

First install PIP from: https://pip.pypa.io/en/latest/

Then you want to make sure the path to pg_config is in your PATH (echo $PATH); if not, you can edit your bash_profile:

vi /Users/<user>/.bash_profile

and add this line:

export PATH=$PATH:/path/to/pg_config/bin

If you don’t know where pg_config is, you can use the “locate” tool, but be sure your locate.db is up to date (I was using an old locate.db with paths that no longer existed).

sudo /usr/libexec/locate.updatedb
locate pg_config

Then install Django (if needed) and psycopg2.

sudo pip install Django
sudo pip install psycopg2

And then in settings.py (localhost:defaultport)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': '',
        'PORT': '',
    }
}

Greets!


Answer 8


$ sudo apt-get install libpq-dev

Yeah, this solved my problem. After executing this, run: pip install psycopg2


Answer 9


Please note that installing psycopg2 via pip or setup.py requires Visual Studio 2008 (more precisely, the executable file vcvarsall.bat). If you don’t have admin rights to install it or to set the appropriate PATH variable on Windows, you can download an already compiled library from here.


Answer 10


This is a very good step-by-step guide to setting up PostgreSQL on an Ubuntu server. I have tried it with Ubuntu 16.04 and it works.

https://www.digitalocean.com/community/tutorials/how-to-use-postgresql-with-your-django-application-on-ubuntu-14-04


How to write a DataFrame to a Postgres table?

Question: How to write a DataFrame to a Postgres table?


There is a DataFrame.to_sql method, but it works only for MySQL, SQLite and Oracle databases. I can’t pass a Postgres connection or a SQLAlchemy engine to this method.


Answer 0


Starting from pandas 0.14 (released end of May 2014), postgresql is supported. The sql module now uses sqlalchemy to support different database flavors. You can pass a sqlalchemy engine for a postgresql database (see docs). E.g.:

from sqlalchemy import create_engine
engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
df.to_sql('table_name', engine)

You are correct that in pandas up to version 0.13.1 postgresql was not supported. If you need to use an older version of pandas, here is a patched version of pandas.io.sql: https://gist.github.com/jorisvandenbossche/10841234.
(I wrote this a while ago, so I cannot fully guarantee that it always works, but the basis should be there.) If you put that file in your working directory and import it, then you should be able to do the following (where con is a postgresql connection):

import sql  # the patched version (file is named sql.py)
sql.write_frame(df, 'table_name', con, flavor='postgresql')

Answer 1


Faster option:

The following code will copy your pandas DataFrame to a Postgres DB much faster than the df.to_sql method, and you won’t need any intermediate CSV file to store the df.

Create an engine based on your DB specifications.

Create a table in your Postgres DB that has the same number of columns as the DataFrame (df).

Data in DF will get inserted in your postgres table.

from sqlalchemy import create_engine
import psycopg2 
import io

If you want to replace the table, we can first recreate it with the normal to_sql method using just the headers from our df, and then load the entire, time-consuming df into the DB.

engine = create_engine('postgresql+psycopg2://username:password@host:port/database')

df.head(0).to_sql('table_name', engine, if_exists='replace',index=False) #truncates the table

conn = engine.raw_connection()
cur = conn.cursor()
output = io.StringIO()
df.to_csv(output, sep='\t', header=False, index=False)
output.seek(0)
contents = output.getvalue()
cur.copy_from(output, 'table_name', null="") # null values become ''
conn.commit()

Answer 2


Pandas 0.24.0+ solution

In Pandas 0.24.0 a new feature was introduced specifically designed for fast writes to Postgres. You can learn more about it here: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#io-sql-method

import csv
from io import StringIO

from sqlalchemy import create_engine

def psql_insert_copy(table, conn, keys, data_iter):
    # gets a DBAPI connection that can provide a cursor
    dbapi_conn = conn.connection
    with dbapi_conn.cursor() as cur:
        s_buf = StringIO()
        writer = csv.writer(s_buf)
        writer.writerows(data_iter)
        s_buf.seek(0)

        columns = ', '.join('"{}"'.format(k) for k in keys)
        if table.schema:
            table_name = '{}.{}'.format(table.schema, table.name)
        else:
            table_name = table.name

        sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
            table_name, columns)
        cur.copy_expert(sql=sql, file=s_buf)

engine = create_engine('postgresql://myusername:mypassword@myhost:5432/mydatabase')
df.to_sql('table_name', engine, method=psql_insert_copy)

Answer 3


This is how I did it.

It may be faster because it is using execute_batch:

# df is the dataframe
if len(df) > 0:
    df_columns = list(df)
    # create (col1,col2,...)
    columns = ",".join(df_columns)

    # create VALUES('%s', '%s",...) one '%s' per column
    values = "VALUES({})".format(",".join(["%s" for _ in df_columns])) 

    #create INSERT INTO table (columns) VALUES('%s',...)
    insert_stmt = "INSERT INTO {} ({}) {}".format(table,columns,values)

    cur = conn.cursor()
    psycopg2.extras.execute_batch(cur, insert_stmt, df.values)
    conn.commit()
    cur.close()
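
Note that execute_batch lives in psycopg2.extras, so the snippet assumes import psycopg2.extras. It also accepts an optional page_size argument (default 100); raising it can cut down round trips further. A hedged variant of the call above:

import psycopg2.extras

# same insert statement as above, batching 1000 parameter sets per round trip
psycopg2.extras.execute_batch(cur, insert_stmt, df.values, page_size=1000)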

Answer 4


For Python 2.7 and Pandas 0.24.2 and using Psycopg2

Psycopg2 Connection Module

import io  # io is used further below
import psycopg2
from psycopg2.extras import RealDictCursor

def dbConnect (db_parm, username_parm, host_parm, pw_parm):
    # Parse in connection information
    credentials = {'host': host_parm, 'database': db_parm, 'user': username_parm, 'password': pw_parm}
    conn = psycopg2.connect(**credentials)
    conn.autocommit = True  # auto-commit each entry to the database
    conn.cursor_factory = RealDictCursor
    cur = conn.cursor()
    print ("Connected Successfully to DB: " + str(db_parm) + "@" + str(host_parm))
    return conn, cur

Connect to the database

conn, cur = dbConnect(databaseName, dbUser, dbHost, dbPwd)

Assuming the dataframe is already present as df

output = io.BytesIO() # For Python3 use StringIO
df.to_csv(output, sep='\t', header=True, index=False)
output.seek(0) # Required for rewinding the String object
copy_query = "COPY mem_info FROM STDOUT csv DELIMITER '\t' NULL ''  ESCAPE '\\' HEADER "  # Replace your table name in place of mem_info
cur.copy_expert(copy_query, output)
conn.commit()

Python / postgres / psycopg2: getting the ID of the row just inserted

Question: Python / postgres / psycopg2: getting the ID of the row just inserted


I’m using Python and psycopg2 to interface to postgres.

When I insert a row…

sql_string = "INSERT INTO hundred (name,name_slug,status) VALUES ("
sql_string += hundred_name + ", '" + hundred_slug + "', " + status + ");"
cursor.execute(sql_string)

… how do I get the ID of the row I’ve just inserted? Trying:

hundred = cursor.fetchall() 

returns an error, while using RETURNING id:

sql_string = "INSERT INTO domes_hundred (name,name_slug,status) VALUES ("
sql_string += hundred_name + ", '" + hundred_slug + "', " + status + ") RETURNING id;"
hundred = cursor.execute(sql_string)

simply returns None.

UPDATE: So does currval (even though using this command directly in postgres works):

sql_string = "SELECT currval(pg_get_serial_sequence('hundred', 'id'));"
hundred_id = cursor.execute(sql_string)

Can anyone advise?

thanks!


Answer 0


cursor.execute("INSERT INTO .... RETURNING id")
id_of_new_row = cursor.fetchone()[0]

And please do not build SQL strings containing values manually. You can (and should!) pass values separately, which makes escaping unnecessary and SQL injection impossible:

sql_string = "INSERT INTO domes_hundred (name,name_slug,status) VALUES (%s,%s,%s) RETURNING id;"
cursor.execute(sql_string, (hundred_name, hundred_slug, status))
hundred = cursor.fetchone()[0]

See the psycopg docs for more details: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries


Answer 1


I ended up here because I had a similar problem, but we’re using Postgres-XC, which doesn’t yet support the RETURNING ID clause. In that case you can use:

cursor.execute('INSERT INTO ........')
cursor.execute('SELECT LASTVAL()')
lastid = cursor.fetchone()['lastval']

Just in case it was useful for anyone!
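
Note that indexing the row with ['lastval'] assumes a dict-like cursor (for example psycopg2.extras.RealDictCursor); with the default tuple cursor the same idea looks like this:

cursor.execute('SELECT LASTVAL()')
lastid = cursor.fetchone()[0]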




Strange SQLAlchemy error message: TypeError: 'dict' object does not support indexing

Question: Strange SQLAlchemy error message: TypeError: 'dict' object does not support indexing


I am using hand-crafted SQL to fetch data from a PG database, using SQLAlchemy. I am trying a query which contains the SQL LIKE operator ‘%’, and that seems to throw SQLAlchemy for a loop:

sql = """
       SELECT DISTINCT u.name from user u
        INNER JOIN city c ON u.city_id = c.id
        WHERE c.designation=upper('fantasy') 
        AND c.id IN (select id from ref_geog where short_name LIKE '%opt')
      """

# The last line in the above statement throws the error mentioned in the title. 
# However if the last line is change to:
# AND c.id IN (select id from ref_geog where short_name = 'helloopt')
# the script runs correctly.
#
# I also tried double escaping the '%' i.e. using '%%' instead - that generated the same error as previously.

connectDb()
res = executeSql(sql)
print res
closeDbConnection()

Any one knows what is causing this misleading error message and how I may fix it?

[[Edit]]

Before any one asks, there is nothing special or fancy about the functions included above. For example the function executeSql() simply invokes conn.execute(sql) and returns the results. The variable conn is simply the previously established connection to the database.


Answer 0


You have to write %% to get a literal %, because % in Python is used for string formatting; when you write a single %, it assumes that you are going to substitute some value there.

So whenever you want a literal % in a query string, always write a double %%.


Answer 1


SQLAlchemy has a text() function for wrapping text which appears to correctly escape the SQL for you.

I.e.

res = executeSql(sqlalchemy.text(sql))

should work for you and save you from having to do the manual escaping.
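
text() also lets you bind the pattern as a parameter instead of embedding the % in the SQL string, which sidesteps the escaping issue entirely. A minimal sketch reusing the query from the question (the :pattern name is illustrative, and conn is the previously established connection):

from sqlalchemy import text

query = text("SELECT id FROM ref_geog WHERE short_name LIKE :pattern")
res = conn.execute(query, {'pattern': '%opt'}).fetchall()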


Answer 2


I cannot find “executeSql” in the SQLAlchemy version 1.2 docs, but the line below worked for me:

engine.execute(sqlalchemy.text(sql_query))

Answer 3


It seems like your problem may be related to this bug.

In which case, you should triple-escape as a workaround.


Answer 4


I found one more case where this error shows up:

c.execute("SELECT * FROM t WHERE a = %s")

In other words, this happens if you provide a parameter placeholder (%s) in the query but forget to pass the query params. In this case the error message is very misleading.
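
For contrast, the working form simply passes the parameters along with the query (the value is illustrative):

value = 42  # illustrative value
c.execute("SELECT * FROM t WHERE a = %s", (value,))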


Answer 5


One more note: you must escape (or delete) % characters in comments as well. Unfortunately, sqlalchemy.text(query_string) does not escape the percent signs in comments.


Answer 6


Another way of solving your problem, if you don’t want to escape % characters or use sqlalchemy.text(), is to use a regular expression.

Instead of:

select id from ref_geog where short_name LIKE '%opt'

Try (for case-sensitive match):

select id from ref_geog where short_name ~ 'opt$' 

or (for case-insensitive):

select id from ref_geog where short_name ~* 'opt$'

Both LIKE and regex are covered in the documentation on pattern matching.

Note that:

Unlike LIKE patterns, a regular expression is allowed to match anywhere within a string, unless the regular expression is explicitly anchored to the beginning or end of the string.

For an anchor, you can use the assertion $ for end of string (or ^ for beginning).


Answer 7


This could also result from the case where the parameters to be passed to the SQL are declared as a DICT but are referenced in the SQL as a LIST or TUPLE.


psycopg2: insert multiple rows with one query

Question: psycopg2: insert multiple rows with one query


I need to insert multiple rows with one query (the number of rows is not constant), so I need to execute a query like this one:

INSERT INTO t (a, b) VALUES (1, 2), (3, 4), (5, 6);

The only way I know is

args = [(1,2), (3,4), (5,6)]
args_str = ','.join(cursor.mogrify("%s", (x, )) for x in args)
cursor.execute("INSERT INTO t (a, b) VALUES "+args_str)

but I want some simpler way.


Answer 0


I built a program that inserts multiple rows into a server located in another city.

I found out that using this method was about 10 times faster than executemany. In my case tup is a tuple containing about 2000 rows. It took about 10 seconds when using this method:

args_str = ','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x) for x in tup)
cur.execute("INSERT INTO table VALUES " + args_str) 

and 2 minutes when using this method:

cur.executemany("INSERT INTO table VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s)", tup)

Answer 1


New execute_values method in Psycopg 2.7:

import psycopg2.extras  # execute_values lives in the extras module

data = [(1,'x'), (2,'y')]
insert_query = 'insert into t (a, b) values %s'
psycopg2.extras.execute_values(
    cursor, insert_query, data, template=None, page_size=100
)

The pythonic way of doing it in Psycopg 2.6:

data = [(1,'x'), (2,'y')]
records_list_template = ','.join(['%s'] * len(data))
insert_query = 'insert into t (a, b) values {}'.format(records_list_template)
cursor.execute(insert_query, data)

Explanation: If the data to be inserted is given as a list of tuples like in

data = [(1,'x'), (2,'y')]

then it is already in the exact required format as

  1. the values syntax of the insert clause expects a list of records as in

    insert into t (a, b) values (1, 'x'),(2, 'y')

  2. Psycopg adapts a Python tuple to a Postgresql record.

The only necessary work is to provide a records list template to be filled by psycopg

# We use the data list to be sure of the template length
records_list_template = ','.join(['%s'] * len(data))

and place it in the insert query

insert_query = 'insert into t (a, b) values {}'.format(records_list_template)

Printing the insert_query outputs

insert into t (a, b) values %s,%s

Now to the usual Psycopg arguments substitution

cursor.execute(insert_query, data)

Or just testing what will be sent to the server

print (cursor.mogrify(insert_query, data).decode('utf8'))

Output:

insert into t (a, b) values (1, 'x'),(2, 'y')

Answer 2


Update with psycopg2 2.7:

The classic executemany() is about 60 times slower than @ant32 ‘s implementation (called “folded”) as explained in this thread: https://www.postgresql.org/message-id/20170130215151.GA7081%40deb76.aryehleib.com

This implementation was added to psycopg2 in version 2.7 and is called execute_values():

from psycopg2.extras import execute_values
execute_values(cur,
    "INSERT INTO test (id, v1, v2) VALUES %s",
    [(1, 2, 3), (4, 5, 6), (7, 8, 9)])

Previous Answer:

To insert multiple rows, using the multirow VALUES syntax with execute() is about 10x faster than using psycopg2 executemany(). Indeed, executemany() just runs many individual INSERT statements.

@ant32 ‘s code works perfectly in Python 2. But in Python 3, cursor.mogrify() returns bytes, cursor.execute() takes either bytes or strings, and ','.join() expects str instance.

So in Python 3 you may need to modify @ant32 ‘s code, by adding .decode('utf-8'):

args_str = ','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x).decode('utf-8') for x in tup)
cur.execute("INSERT INTO table VALUES " + args_str)

Or by using bytes (with b'' or b"") only:

args_bytes = b','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x) for x in tup)
cur.execute(b"INSERT INTO table VALUES " + args_bytes) 

Answer 3


cursor.copy_from is the fastest solution I’ve found for bulk inserts by far. Here’s a gist I made containing a class named IteratorFile which allows an iterator yielding strings to be read like a file. We can convert each input record to a string using a generator expression. So the solution would be

args = [(1,2), (3,4), (5,6)]
f = IteratorFile(("{}\t{}".format(x[0], x[1]) for x in args))
cursor.copy_from(f, 'table_name', columns=('a', 'b'))

For this trivial size of args it won’t make much of a speed difference, but I see big speedups when dealing with thousands of rows. It will also be more memory efficient than building a giant query string: an iterator only ever holds one input record in memory at a time, whereas by building the query string you may at some point run out of memory in your Python process or in Postgres.
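
The gist itself is not reproduced in the answer. Purely as an illustration of the idea (not the gist's actual code), a minimal file-like wrapper with the read() and readline() methods that copy_from expects could look like this:

class IteratorFile:
    """Wrap an iterator of tab-separated lines so it can be read like a file."""
    def __init__(self, iterable):
        self._lines = iter(iterable)
        self._buffer = ''

    def read(self, size=-1):
        # pull lines from the iterator until we have `size` characters (or run out)
        while size < 0 or len(self._buffer) < size:
            try:
                self._buffer += next(self._lines) + '\n'
            except StopIteration:
                break
        if size < 0:
            size = len(self._buffer)
        chunk, self._buffer = self._buffer[:size], self._buffer[size:]
        return chunk

    def readline(self):
        # return one full line (including the trailing newline), or '' at EOF
        while '\n' not in self._buffer:
            try:
                self._buffer += next(self._lines) + '\n'
            except StopIteration:
                break
        pos = self._buffer.find('\n')
        end = len(self._buffer) if pos < 0 else pos + 1
        line, self._buffer = self._buffer[:end], self._buffer[end:]
        return line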


Answer 4

来自Postgresql.org的 Psycopg2的教程页面的摘要(请参见底部)

我想向您展示的最后一项是如何使用字典插入多行。如果您具有以下条件:

namedict = ({"first_name":"Joshua", "last_name":"Drake"},
            {"first_name":"Steven", "last_name":"Foo"},
            {"first_name":"David", "last_name":"Bar"})

您可以使用以下命令轻松地插入字典中的全部三行:

cur = conn.cursor()
cur.executemany("""INSERT INTO bar(first_name,last_name) VALUES (%(first_name)s, %(last_name)s)""", namedict)

它不会节省太多代码,但是绝对可以看起来更好。

A snippet from Psycopg2’s tutorial page at Postgresql.org (see bottom):

A last item I would like to show you is how to insert multiple rows using a dictionary. If you had the following:

namedict = ({"first_name":"Joshua", "last_name":"Drake"},
            {"first_name":"Steven", "last_name":"Foo"},
            {"first_name":"David", "last_name":"Bar"})

You could easily insert all three rows within the dictionary by using:

cur = conn.cursor()
cur.executemany("""INSERT INTO bar(first_name,last_name) VALUES (%(first_name)s, %(last_name)s)""", namedict)

It doesn't save much code, but it definitely looks better.


回答 5

所有这些技术在Postgres术语中都称为“扩展插入”(Extended Inserts),截至2016年11月24日,它仍然比psycopg2的executemany()以及该线程中列出的所有其他方法(在得出这个答案之前我都试过了)快很多。

这是一段不使用cur.mogrify的代码,简单易懂:

valueSQL = [ '%s', '%s', '%s', ... ] # as many as you have columns.
sqlrows = []
rowsPerInsert = 3 # more means faster, but with diminishing returns..
for row in getSomeData:
        # row == [1, 'a', 'yolo', ... ]
        sqlrows += row
        if ( len(sqlrows)/len(valueSQL) ) % rowsPerInsert == 0:
                # sqlrows == [ 1, 'a', 'yolo', 2, 'b', 'swag', 3, 'c', 'selfie' ]
                insertSQL = 'INSERT INTO "twitter" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*rowsPerInsert)
                cur.execute(insertSQL, sqlrows)
                con.commit()
                sqlrows = []
insertSQL = 'INSERT INTO "twitter" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*len(sqlrows))
cur.execute(insertSQL, sqlrows)
con.commit()

但应注意,如果可以使用copy_from(),则应该使用copy_from;)

All of these techniques are called "Extended Inserts" in Postgres terminology, and as of the 24th of November 2016, it's still a ton faster than psycopg2's executemany() and all the other methods listed in this thread (which I tried before coming to this answer).

Here's some code which doesn't use cur.mogrify and is nice and simple to get your head around:

valueSQL = [ '%s', '%s', '%s', ... ] # as many as you have columns.
sqlrows = []
rowsPerInsert = 3 # more means faster, but with diminishing returns..
for row in getSomeData:
        # row == [1, 'a', 'yolo', ... ]
        sqlrows += row
        if ( len(sqlrows)/len(valueSQL) ) % rowsPerInsert == 0:
                # sqlrows == [ 1, 'a', 'yolo', 2, 'b', 'swag', 3, 'c', 'selfie' ]
                insertSQL = 'INSERT INTO "twitter" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*rowsPerInsert)
                cur.execute(insertSQL, sqlrows)
                con.commit()
                sqlrows = []
insertSQL = 'INSERT INTO "twitter" VALUES ' + ','.join(['(' + ','.join(valueSQL) + ')']*len(sqlrows))
cur.execute(insertSQL, sqlrows)
con.commit()

But it should be noted that if you can use copy_from(), you should use copy_from ;)


回答 6

几年来我一直在使用上面ant32的答案。但是我发现它在Python 3中会抛出错误,因为mogrify返回的是字节字符串。

显式转换为字节字符串是使代码兼容Python 3的简单解决方案。

args_str = b','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x) for x in tup) 
cur.execute(b"INSERT INTO table VALUES " + args_str)

I've been using ant32's answer above for several years. However I've found that it throws an error in Python 3 because mogrify returns a byte string.

Converting explicitly to byte strings is a simple solution for making the code Python 3 compatible.

args_str = b','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x) for x in tup) 
cur.execute(b"INSERT INTO table VALUES " + args_str)

回答 7

另一种高效的好方法是将要插入的行作为一个参数传递,即一个json对象数组。

例如,您传递的参数:

[ {id: 18, score: 1}, { id: 19, score: 5} ]

它是数组,其中可以包含任意数量的对象。然后您的SQL看起来像:

INSERT INTO links (parent_id, child_id, score) 
SELECT 123, (r->>'id')::int, (r->>'score')::int 
FROM unnest($1::json[]) as r 

注意:您的Postgres必须足够新,才能支持json。

Another nice and efficient approach is to pass the rows for insertion as a single argument: an array of json objects.

E.g. you pass this argument:

[ {id: 18, score: 1}, { id: 19, score: 5} ]

It is an array which may contain any number of objects. Then your SQL looks like:

INSERT INTO links (parent_id, child_id, score) 
SELECT 123, (r->>'id')::int, (r->>'score')::int 
FROM unnest($1::json[]) as r 

Note: your Postgres must be new enough to support json.
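For illustration, a rough psycopg2 version of this idea (a sketch only: it assumes the links table from the SQL above exists, keeps the constant 123 as parent_id from the example, and uses json_array_elements() instead of unnest() so the whole batch can travel as a single json parameter):

import json
import psycopg2

rows = [{"id": 18, "score": 1}, {"id": 19, "score": 5}]

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
cur = conn.cursor()
# The whole batch is passed as one json value and unpacked server-side (PostgreSQL 9.3+).
cur.execute("""
    INSERT INTO links (parent_id, child_id, score)
    SELECT 123, (r->>'id')::int, (r->>'score')::int
    FROM json_array_elements(%s::json) AS r
""", (json.dumps(rows),))
conn.commit()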


回答 8

上面(https://stackoverflow.com/a/30721460/11100064)由@joseph.sheedy(https://stackoverflow.com/users/958118/joseph-sheedy)提供的cursor.copy_from解决方案确实快如闪电。

但是,他给出的示例不适用于具有任意多个字段的记录,因此我花了一些时间才弄清楚如何正确使用它。

IteratorFile需要用像这样以制表符分隔的字段来实例化(records是一个字典列表,其中每个字典r是一条记录):

    f = IteratorFile("{0}\t{1}\t{2}\t{3}\t{4}".format(r["id"],
        r["type"],
        r["item"],
        r["month"],
        r["revenue"]) for r in records)

为了推广到任意数量的字段,我们首先创建一个带有正确数量的制表符和字段占位符的行字符串"{}\t{}\t{}....\t{}",然后对records中的每条记录r使用line.format(*list(r.values()))来填充字段值:

        line = "\t".join(["{}"] * len(records[0]))

        f = IteratorFile(line.format(*list(r.values())) for r in records)

完整的功能在这里

The cursor.copy_from solution provided by @joseph.sheedy (https://stackoverflow.com/users/958118/joseph-sheedy) above (https://stackoverflow.com/a/30721460/11100064) is indeed lightning fast.

However, the example he gives is not generically usable for records with any number of fields, and it took me a while to figure out how to use it correctly.

The IteratorFile needs to be instantiated with tab-separated fields like this (records is a list of dicts, where each dict r is one record):

    f = IteratorFile("{0}\t{1}\t{2}\t{3}\t{4}".format(r["id"],
        r["type"],
        r["item"],
        r["month"],
        r["revenue"]) for r in records)

To generalise to an arbitrary number of fields, we first create a line string with the correct number of tabs and field placeholders, "{}\t{}\t{}....\t{}", and then use line.format(*list(r.values())) for each record r in records to fill in the field values:

        line = "\t".join(["{}"] * len(records[0]))

        f = IteratorFile(line.format(*list(r.values())) for r in records)

complete function in gist here.


回答 9

如果您使用的是SQLAlchemy,则无需手工拼接字符串,因为SQLAlchemy支持为单个INSERT语句生成多行的VALUES子句:

rows = []
for i, name in enumerate(rawdata):
    row = {
        'id': i,
        'name': name,
        'valid': True,
    }
    rows.append(row)
if len(rows) > 0:  # INSERT fails if no rows
    insert_query = SQLAlchemyModelName.__table__.insert().values(rows)
    session.execute(insert_query)

If you’re using SQLAlchemy, you don’t need to mess with hand-crafting the string because SQLAlchemy supports generating a multi-row VALUES clause for a single INSERT statement:

rows = []
for i, name in enumerate(rawdata):
    row = {
        'id': i,
        'name': name,
        'valid': True,
    }
    rows.append(row)
if len(rows) > 0:  # INSERT fails if no rows
    insert_query = SQLAlchemyModelName.__table__.insert().values(rows)
    session.execute(insert_query)

回答 10

自发布此问题以来,execute_batch已添加到psycopg2。

它比execute_values慢,但使用起来更简单。

execute_batch has been added to psycopg2 since this question was posted.

It is slower than execute_values but simpler to use.
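For reference, a minimal call looks like this (assuming cur is an open psycopg2 cursor and the test table from the execute_values() example above exists); the optional page_size argument controls how many statements are grouped per round trip, with 100 being the default:

from psycopg2.extras import execute_batch

execute_batch(cur,
    "INSERT INTO test (id, v1, v2) VALUES (%s, %s, %s)",
    [(1, 2, 3), (4, 5, 6), (7, 8, 9)],
    page_size=100)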


回答 11

executemany接受元组数组

https://www.postgresqltutorial.com/postgresql-python/insert/

    """ array of tuples """
    vendor_list = [(value1,)]

    """ insert multiple vendors into the vendors table  """
    sql = "INSERT INTO vendors(vendor_name) VALUES(%s)"
    conn = None
    try:
        # read database configuration
        params = config()
        # connect to the PostgreSQL database
        conn = psycopg2.connect(**params)
        # create a new cursor
        cur = conn.cursor()
        # execute the INSERT statement
        cur.executemany(sql,vendor_list)
        # commit the changes to the database
        conn.commit()
        # close communication with the database
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()

executemany accepts an array of tuples.

https://www.postgresqltutorial.com/postgresql-python/insert/

    """ array of tuples """
    vendor_list = [(value1,)]

    """ insert multiple vendors into the vendors table  """
    sql = "INSERT INTO vendors(vendor_name) VALUES(%s)"
    conn = None
    try:
        # read database configuration
        params = config()
        # connect to the PostgreSQL database
        conn = psycopg2.connect(**params)
        # create a new cursor
        cur = conn.cursor()
        # execute the INSERT statement
        cur.executemany(sql,vendor_list)
        # commit the changes to the database
        conn.commit()
        # close communication with the database
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()

回答 12

如果要在一条INSERT语句中插入多行(假设您未使用ORM),到目前为止对我来说最简单的方法是使用字典列表。这是一个例子:

 t = [{'id':1, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 6},
      {'id':2, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 7},
      {'id':3, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 8}]

conn.execute("""insert into campaign_dates
             (id, start_date, end_date, campaignid)
              values (%(id)s, %(start_date)s, %(end_date)s, %(campaignid)s);""",
             t)

如您所见,将仅执行一个查询:

INFO sqlalchemy.engine.base.Engine insert into campaign_dates (id, start_date, end_date, campaignid) values (%(id)s, %(start_date)s, %(end_date)s, %(campaignid)s);
INFO sqlalchemy.engine.base.Engine [{'campaignid': 6, 'id': 1, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}, {'campaignid': 7, 'id': 2, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}, {'campaignid': 8, 'id': 3, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}]
INFO sqlalchemy.engine.base.Engine COMMIT

If you want to insert multiple rows within one insert statement (assuming you are not using an ORM), the easiest way so far for me is to use a list of dictionaries. Here is an example:

 t = [{'id':1, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 6},
      {'id':2, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 7},
      {'id':3, 'start_date': '2015-07-19 00:00:00', 'end_date': '2015-07-20 00:00:00', 'campaignid': 8}]

conn.execute("""insert into campaign_dates
             (id, start_date, end_date, campaignid)
              values (%(id)s, %(start_date)s, %(end_date)s, %(campaignid)s);""",
             t)

As you can see only one query will be executed:

INFO sqlalchemy.engine.base.Engine insert into campaign_dates (id, start_date, end_date, campaignid) values (%(id)s, %(start_date)s, %(end_date)s, %(campaignid)s);
INFO sqlalchemy.engine.base.Engine [{'campaignid': 6, 'id': 1, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}, {'campaignid': 7, 'id': 2, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}, {'campaignid': 8, 'id': 3, 'end_date': '2015-07-20 00:00:00', 'start_date': '2015-07-19 00:00:00'}]
INFO sqlalchemy.engine.base.Engine COMMIT

回答 13

使用aiopg时,下面的代码片段效果很好:

    # items = [10, 11, 12, 13]
    # gid = 1
    tup = [(gid, pid) for pid in items]
    args_str = ",".join([str(s) for s in tup])
    # insert into group values (1, 10), (1, 11), (1, 12), (1, 13)
    yield from cur.execute("INSERT INTO group VALUES " + args_str)

Using aiopg – The snippet below works perfectly fine

    # items = [10, 11, 12, 13]
    # gid = 1
    tup = [(gid, pid) for pid in items]
    args_str = ",".join([str(s) for s in tup])
    # insert into group values (1, 10), (1, 11), (1, 12), (1, 13)
    yield from cur.execute("INSERT INTO group VALUES " + args_str)

回答 14

最终,SQLAlchemy 1.2版本添加了这项新实现:当您使用use_batch_mode=True初始化引擎时,它会使用psycopg2.extras.execute_batch()而不是executemany(),例如:

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    use_batch_mode=True)

http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#change-4109

这样,不得不使用SQLAlchemy的人就不必再费心去尝试sqla和psycopg2的各种组合,或者直接手写SQL了。

Finally, in SQLAlchemy version 1.2, this new implementation was added: it uses psycopg2.extras.execute_batch() instead of executemany() when you initialize your engine with use_batch_mode=True, like:

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    use_batch_mode=True)

http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#change-4109

Then anyone who has to use SQLAlchemy won't need to bother trying different combinations of sqla and psycopg2, or writing direct SQL.


django测试应用程序错误-创建测试数据库时出错:创建数据库的权限被拒绝

问题:django测试应用程序错误-创建测试数据库时出错:创建数据库的权限被拒绝

当我尝试用以下命令测试任何应用程序时(我是在尝试用fabric部署myproject时注意到这一点的,fabric会用到此命令):

python manage.py test appname

我收到此错误:

Creating test database for alias 'default'...
Got an error creating the test database: permission denied to create database

Type 'yes' if you would like to try deleting the test database 'test_finance', or 'no' to cancel

syncdb命令似乎起作用。我在settings.py中的数据库设置:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'finance',                      # Or path to database file if using sqlite3.
        'USER': 'django',                      # Not used with sqlite3.
        'PASSWORD': 'mydb123',                  # Not used with sqlite3.
        'HOST': '127.0.0.1',                      # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',                      # Set to empty string for default. Not used with sqlite3.
    }
}

When I try to test any app with command (I noticed it when I tried to deploy myproject using fabric, which uses this command):

python manage.py test appname

I get this error:

Creating test database for alias 'default'...
Got an error creating the test database: permission denied to create database

Type 'yes' if you would like to try deleting the test database 'test_finance', or 'no' to cancel

syncdb command seems to work. My database settings in settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'finance',                      # Or path to database file if using sqlite3.
        'USER': 'django',                      # Not used with sqlite3.
        'PASSWORD': 'mydb123',                  # Not used with sqlite3.
        'HOST': '127.0.0.1',                      # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',                      # Set to empty string for default. Not used with sqlite3.
    }
}

回答 0

Django运行测试套件时会创建一个新数据库,在您的情况下是test_finance。用户名为django的postgres用户没有创建数据库的权限,因此出现了该错误消息。

当您运行migrate或syncdb时,Django不会尝试创建finance数据库,因此不会出现任何错误。

您可以以超级用户身份在postgres shell中运行以下命令,为django用户添加createdb权限(提示来自这个Stack Overflow回答)。

=> ALTER USER django CREATEDB;

注意:ALTER USER <username> CREATEDB;命令中使用的用户名需要与Django设置文件中的数据库用户匹配。在这种情况下,原始提问者的用户是django,如上面的答案所示。

When Django runs the test suite, it creates a new database, in your case test_finance. The postgres user with username django does not have permission to create a database, hence the error message.

When you run migrate or syncdb, Django does not try to create the finance database, so you don’t get any errors.

You can add the createdb permission to the django user by running the following command in the postgres shell as a superuser (hat tip to this stack overflow answer).

=> ALTER USER django CREATEDB;

Note: The username used in the ALTER USER <username> CREATEDB; command needs to match the database user in your Django settings file. In this case, the original poster had the user as django, as in the answer above.


回答 1

我找到了一个解决您问题的有趣方法。
实际上,对于MySQL,您可以为不存在的数据库授予特权。
因此,您可以在设置中为测试数据库添加名称'test_finance':

    DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'finance',                      # Or path to database file if using sqlite3.
        'USER': 'django',                      # Not used with sqlite3.
        'PASSWORD': 'mydb123',                  # Not used with sqlite3.
        'HOST': '127.0.0.1',                      # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',                      # Set to empty string for default. Not used with sqlite3.
        'TEST': {
            'NAME': 'test_finance',
        },
    }
}

以root用户身份启动MySQL Shell:

mysql -u root -p

现在将所有特权授予该MySQL中不存在的数据库:

GRANT ALL PRIVILEGES ON test_finance.* TO 'django'@'localhost';

现在,Django将毫无问题地开始测试。

I have found an interesting solution to your problem.
In fact, for MySQL you can grant privileges for a non-existing database.
So you can add the name 'test_finance' for your test database in your settings:

    DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'finance',                      # Or path to database file if using sqlite3.
        'USER': 'django',                      # Not used with sqlite3.
        'PASSWORD': 'mydb123',                  # Not used with sqlite3.
        'HOST': '127.0.0.1',                      # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',                      # Set to empty string for default. Not used with sqlite3.
        'TEST': {
            'NAME': 'test_finance',
        },
    }
}

start MySQL shell as the root user:

mysql -u root -p

and now grant all privileges to this non-existing database in MySQL:

GRANT ALL PRIVILEGES ON test_finance.* TO 'django'@'localhost';

Now Django will start tests without any problems.


回答 2

对于Postgres,用户必须具有createdb权限。

ALTER ROLE miriam CREATEDB;

请参阅此文档:https://docs.djangoproject.com/zh-CN/2.0/topics/testing/overview/#the-test-database

In the case of Postgres, the user must have createdb permission.

ALTER ROLE miriam CREATEDB;

See this documentation: https://docs.djangoproject.com/en/2.0/topics/testing/overview/#the-test-database


回答 3

如果数据库是mysql,那么这两个更改将完成任务。

1.打开mysite / mysite / settings.py

您的数据库设置应具有一个额外的TEST块,如projectname_test所示。

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'USER': 'chandan',
        'PASSWORD': 'root',
        'HOST': 'localhost',
        'PORT': '3306',
        'TEST': {
            'NAME': 'myproject_test',
        },
    }
}

2.使用mysql命令提示符或MySQL Workbench键入以下命令,将所有特权授予settings.py中指定的用户:

GRANT ALL PRIVILEGES ON myproject_test.* TO 'chandan'@'localhost';

现在您可以运行 python manage.py test polls

If database is mysql then these two changes will get the things done.

1.Open mysite/mysite/settings.py

Your database settings should have an additional TEST block as shown with projectname_test.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'USER': 'chandan',
        'PASSWORD': 'root',
        'HOST': 'localhost',
        'PORT': '3306',
        'TEST': {
            'NAME': 'myproject_test',
        },
    }
}

2.Type the below command using the mysql command prompt or MySQL Workbench to give all privileges to the user specified in settings.py:

GRANT ALL PRIVILEGES ON myproject_test.* TO 'chandan'@'localhost';

Now you can run python manage.py test polls.


回答 4

如果您使用的是docker-compose,以下方法对我有用:

ALTER ROLE username CREATEDB;
GRANT ALL PRIVILEGES ON test_database_name.* TO 'username';

要么

ALTER ROLE username CREATEDB;
GRANT ALL PRIVILEGES ON *.* TO 'username'@'%';

我的设置如下所示:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'database_name',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': '3306',
    }
}

我的docker-compose.yml如下所示:

version: '3'
services:
  web:
      build: .
      command: './wait_for_db_and_start_server.sh'
      env_file: env_web
      working_dir: /project_name
      links:
        - db
      volumes:
        - .:/volume_name
      ports:
        - "8000:8000"
      depends_on:
        - db
  db:
    image: mysql:5.7
    restart: always
    env_file: env_db
    working_dir: /db
    volumes:
      - ./Dump.sql:/db/Dump.sql
    ports:
      - "3306:3306"

If you are using docker-compose what worked for me was the following:

ALTER ROLE username CREATEDB;
GRANT ALL PRIVILEGES ON test_database_name.* TO 'username';

or

ALTER ROLE username CREATEDB;
GRANT ALL PRIVILEGES ON *.* TO 'username'@'%';

My settings looks like this:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'database_name',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': '3306',
    }
}

and my docker-compose.yml looks as follows:

version: '3'
services:
  web:
      build: .
      command: './wait_for_db_and_start_server.sh'
      env_file: env_web
      working_dir: /project_name
      links:
        - db
      volumes:
        - .:/volume_name
      ports:
        - "8000:8000"
      depends_on:
        - db
  db:
    image: mysql:5.7
    restart: always
    env_file: env_db
    working_dir: /db
    volumes:
      - ./Dump.sql:/db/Dump.sql
    ports:
      - "3306:3306"

回答 5

就我而言,GRANT PRIVILEGES解决方案在Python 3.7.2、Django 2.1.7和MySQL 5.6.23下不起作用……我不知道为什么。

所以我决定使用SQLite作为TEST数据库…

DATABASES = {
    'default': {
        'NAME': 'productiondb',
        'ENGINE': 'mysql.connector.django',   # 'django.db.backends.mysql'
        'USER': '<user>',
        'PASSWORD': '<pass>',
        'HOST': 'localhost',
        'PORT': 3306,
        'OPTIONS': {
            'autocommit': True,
        },
        'TEST': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        },
    }
}

此后,测试就可以毫无问题地运行了:

$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).

Destroying test database for alias 'default'...
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Process finished with exit code 0

In my case, GRANT PRIVILEGES solutions didn’t work with Python 3.7.2, Django 2.1.7 and MySQL 5.6.23… I don’t know why.

So I decided to use SQLite as a TEST database…

DATABASES = {
    'default': {
        'NAME': 'productiondb',
        'ENGINE': 'mysql.connector.django',   # 'django.db.backends.mysql'
        'USER': '<user>',
        'PASSWORD': '<pass>',
        'HOST': 'localhost',
        'PORT': 3306,
        'OPTIONS': {
            'autocommit': True,
        },
        'TEST': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        },
    }
}

After that, the tests ran without trouble:

$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).

Destroying test database for alias 'default'...
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Process finished with exit code 0

回答 6

哇,所以结合这里的所有答案和一点点调整,终于使我得到了一个适用于docker-compose,django和postgres的有效解决方案…

首先,noufal valapra给出的postgres命令不正确(或者可能不是最新的),它应该是:

ALTER USER docker WITH CREATEDB;

对于docker-compose设置,这条命令要放进init.sql文件;我的文件如下所示:

CREATE USER docker;
ALTER USER docker WITH CREATEDB;
CREATE DATABASE djangodb;
GRANT ALL PRIVILEGES ON DATABASE djangodb TO docker;

然后用于postgres的Dockerfile如下所示:

FROM postgres:10.1-alpine
COPY init.sql /docker-entrypoint-initdb.d/

然后Django settings.py具有以下条目:

if 'RDS_DB_NAME' in os.environ:
    INTERNAL_DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['RDS_DB_NAME'],
            'USER': os.environ['RDS_USERNAME'],
            'PASSWORD': os.environ['RDS_PASSWORD'],
            'HOST': os.environ['RDS_HOSTNAME'],
            'PORT': os.environ['RDS_PORT'],
        }
    }

和docker-compose看起来像这样:

version: '3.6'

services:

postgresdb:
  build:
    context: ./
    dockerfile: ./Dockerfile-postgresdb
  volumes:
    - postgresdata:/var/lib/postgresql/data/

django:
  build:
    context: ../
    dockerfile: ./docker/Dockerfile
  environment:
    - RDS_DB_NAME=djangodb
    - RDS_USERNAME=docker
    - RDS_PASSWORD=docker
    - RDS_HOSTNAME=postgresdb
    - RDS_PORT=5432

  stdin_open: true
  tty: true
  depends_on:
    - postgresdb

volumes:
    postgresdata:

Wow so combining all of the answers here with a little tweaking finally got me to a working solution for docker-compose, django, and postgres…

First the postgres command given by noufal valapra is not correct (or maybe just not current), it should be:

ALTER USER docker WITH CREATEDB;

In the case of a docker-compose setup, this will go in the init.sql file, this is what mine looks like:

CREATE USER docker;
ALTER USER docker WITH CREATEDB;
CREATE DATABASE djangodb;
GRANT ALL PRIVILEGES ON DATABASE djangodb TO docker;

Then the Dockerfile for postgres looks like this:

FROM postgres:10.1-alpine
COPY init.sql /docker-entrypoint-initdb.d/

Then the Django settings.py has this entry:

if 'RDS_DB_NAME' in os.environ:
    INTERNAL_DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['RDS_DB_NAME'],
            'USER': os.environ['RDS_USERNAME'],
            'PASSWORD': os.environ['RDS_PASSWORD'],
            'HOST': os.environ['RDS_HOSTNAME'],
            'PORT': os.environ['RDS_PORT'],
        }
    }

and the docker-compose looks like this:

version: '3.6'

services:

postgresdb:
  build:
    context: ./
    dockerfile: ./Dockerfile-postgresdb
  volumes:
    - postgresdata:/var/lib/postgresql/data/

django:
  build:
    context: ../
    dockerfile: ./docker/Dockerfile
  environment:
    - RDS_DB_NAME=djangodb
    - RDS_USERNAME=docker
    - RDS_PASSWORD=docker
    - RDS_HOSTNAME=postgresdb
    - RDS_PORT=5432

  stdin_open: true
  tty: true
  depends_on:
    - postgresdb

volumes:
    postgresdata:

回答 7

也许您把测试置于了暂停模式,或者把它放到了后台运行。请尝试在bash shell中使用fg命令。

Maybe you put your test into suspended mode or ran it as a background job. Try the fg command in your bash shell.


回答 8

超级用户帐户是保证测试顺利进行的最简单方法。因此,让django用户成为超级用户的更简单做法是执行ALTER USER django WITH SUPERUSER;。

有关更多信息https://www.postgresql.org/docs/current/sql-alteruser.html

A superuser account is the easiest way to guarantee smooth testing, so a simpler way of making the django user a superuser is to run ALTER USER django WITH SUPERUSER;.

for more information https://www.postgresql.org/docs/current/sql-alteruser.html


DatabaseError:当前事务中止,命令被忽略,直到事务块结束?

问题:DatabaseError:当前事务中止,命令被忽略,直到事务块结束?

我收到了很多带有以下消息的错误:

"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"

这是在将Django项目的数据库引擎从python-psycopg更改为python-psycopg2之后出现的。

代码保持不变,只是不知道这些错误来自何处。

I got a lot of errors with the message :

"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"

after changing from python-psycopg to python-psycopg2 as the Django project's database engine.

The code remains the same; I just don't know where those errors are coming from.


回答 0

当查询产生错误并且您尝试运行另一个查询而不先回滚事务时,这就是postgres所做的。(您可能会认为这是一项安全功能,可以防止您破坏数据。)

要解决此问题,您将需要弄清楚错误查询在代码中的何处执行。在您的PostgreSQL服务器中使用log_statementlog_min_error_statement选项可能会有所帮助。

This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)

To fix this, you’ll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your postgresql server.
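To make the failure mode concrete, here is a small psycopg2 sketch (the connection string and table name are hypothetical): the first statement fails and aborts the transaction, and only a rollback makes the connection usable again.

import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = conn.cursor()
try:
    cur.execute("SELECT * FROM missing_table")  # fails; the transaction is now aborted
except psycopg2.Error:
    conn.rollback()                             # clears the aborted transaction
cur.execute("SELECT 1")                         # works again after the rollback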


回答 1

要消除错误,请在修复代码后回滚上一个(错误的)事务

from django.db import transaction
transaction.rollback()

您可以使用try-except来防止发生错误:

from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    transaction.rollback()

参考:Django文档

To get rid of the error, roll back the last (erroneous) transaction after you’ve fixed your code:

from django.db import transaction
transaction.rollback()

You can use try-except to prevent the error from occurring:

from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    transaction.rollback()

Refer : Django documentation


回答 2

因此,我遇到了同样的问题。我在这里遇到的问题是我的数据库未正确同步。简单的问题似乎总是引起最大的忧虑。

要在终端的应用程序目录中同步django db,请输入:

$ python manage.py syncdb

编辑:请注意,如果您使用的是django-south,则运行'$ python manage.py migrate'命令也可以解决此问题。

祝您编码愉快!

So, I ran into this same issue. The problem I was having here was that my database wasn’t properly synced. Simple problems always seem to cause the most angst…

To sync your django db, from within your app directory, within terminal, type:

$ python manage.py syncdb

Edit: Note that if you are using django-south, running the ‘$ python manage.py migrate’ command may also resolve this issue.

Happy coding!


回答 3

在Flask中,您只需要编写:

curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()

PS文档在这里https://www.postgresql.org/docs/9.4/static/sql-rollback.html

In Flask you just need to write:

curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()

P.S. Documentation goes here https://www.postgresql.org/docs/9.4/static/sql-rollback.html


回答 4

以我的经验,这些错误是通过以下方式发生的:

try:
    code_that_executes_bad_query()
    # transaction on DB is now bad
except:
    pass

# transaction on db is still bad
code_that_executes_working_query() # raises transaction error

第二个查询本身没有任何问题,但由于真正的错误已经被捕获掉了,引发这个(信息量少得多的)错误的反而是第二个查询。

编辑:仅当except子句捕获的是IntegrityError(或任何其他低级数据库异常)时才会发生这种情况;如果捕获的是DoesNotExist之类的异常,则不会出现此错误,因为DoesNotExist不会破坏事务。

这里的教训是:不要写try/except/pass。

In my experience, these errors happen this way:

try:
    code_that_executes_bad_query()
    # transaction on DB is now bad
except:
    pass

# transaction on db is still bad
code_that_executes_working_query() # raises transaction error

There is nothing wrong with the second query, but since the real error was caught, the second query is the one that raises the (much less informative) error.

edit: this only happens if the except clause catches IntegrityError (or any other low-level database exception). If you catch something like DoesNotExist this error will not come up, because DoesNotExist does not corrupt the transaction.

The lesson here is don’t do try/except/pass.


回答 5

我认为使用PostgreSQL时,priestc提到的模式更可能是此问题的常见原因。

但是,我认为该模式有有效的用途,我不认为这个问题应该成为始终避免它的原因。例如:

try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)

如果您确实对这种模式感到满意,但是想避免到处都是显式的事务处理代码,那么您可能希望考虑启用自动提交模式(PostgreSQL 8.2+):https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode

DATABASES['default'] = {
    #.. you usual options...
    'OPTIONS': {
        'autocommit': True,
    }
}

我不确定是否有重要的性能考虑因素(或任何其他类型的考虑因素)。

I think the pattern priestc mentions is more likely to be the usual cause of this issue when using PostgreSQL.

However I feel there are valid uses for the pattern and I don’t think this issue should be a reason to always avoid it. For example:

try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)

If you do feel OK with this pattern, but want to avoid explicit transaction handling code all over the place then you might want to look into turning on autocommit mode (PostgreSQL 8.2+): https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode

DATABASES['default'] = {
    #.. you usual options...
    'OPTIONS': {
        'autocommit': True,
    }
}

I am unsure if there are important performance considerations (or of any other type).


回答 6

如果您在交互式外壳程序中得到此消息并需要快速修复,请执行以下操作:

from django.db import connection
connection._rollback()

最初在此答案中看到

If you get this while in interactive shell and need a quick fix, do this:

from django.db import connection
connection._rollback()

originally seen in this answer


回答 7

在postgres终端上运行了一个出错的事务后,我遇到了类似的行为。此后什么都执行不了,因为数据库处于错误状态。不过,作为快速解决办法,如果您能接受不回滚事务,下面的命令对我有用:

COMMIT;

I encountered a similar behavior while running a malfunctioning transaction on the postgres terminal. Nothing went through after this, as the database was in an error state. However, as a quick fix, if you can afford to skip rolling back the transaction, the following did the trick for me:

COMMIT;


回答 8

我遇到过类似的问题。解决的办法是迁移数据库(manage.py syncdb,或者如果您使用south,则用manage.py schemamigration --auto <table name>)。

I've got a similar problem. The solution was to migrate the db (manage.py syncdb or manage.py schemamigration --auto <table name> if you use south).


回答 9

只需使用回滚即可。

示例代码:

try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")

just use rollback

Example code

try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")

回答 10

我也刚遇到这个错误,但是它掩盖了另一个更相关的错误消息,其中代码试图将125个字符的字符串存储在100个字符的列中:

DatabaseError: value too long for type character varying(100)

我必须调试代码才能显示以上消息,否则它将显示

DatabaseError: current transaction is aborted

I just had this error too but it was masking another more relevant error message where the code was trying to store a 125 characters string in a 100 characters column:

DatabaseError: value too long for type character varying(100)

I had to debug through the code for the above message to show up, otherwise it displays

DatabaseError: current transaction is aborted

回答 11

作为对@priestc和@Sebastian的回应,如果您做这样的事情怎么办?

try:
    conn.commit()
except:
    pass

cursor.execute( sql )
try: 
    return cursor.fetchall()
except: 
    conn.commit()
    return None

我只是尝试了这段代码,它似乎可以正常工作,无需关心任何可能的错误就可以静默失败,并且在查询良好时可以正常工作。

In response to @priestc and @Sebastian, what if you do something like this?

try:
    conn.commit()
except:
    pass

cursor.execute( sql )
try: 
    return cursor.fetchall()
except: 
    conn.commit()
    return None

I just tried this code and it seems to work, failing silently without having to care about any possible errors, and working when the query is good.


回答 12

我相信@AnujGupta的答案是正确的。但是,回滚本身也可能引发异常,您应该捕获并处理它:

from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        # Log or handle otherwise

如果发现要在各个save()位置重写此代码,则可以提取方法:

import traceback
def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling

最后,您可以用一个装饰器来美化它,该装饰器用于保护那些调用save()的方法:

from functools import wraps
def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped

@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...

即使您实现了上面的装饰器,把try_rolling_back()保留为一个单独提取出来的方法仍然很方便,以便在需要特殊处理、而通用装饰器不够用时手动调用它。

I believe @AnujGupta’s answer is correct. However the rollback can itself raise an exception which you should catch and handle:

from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        # Log or handle otherwise

If you find you’re rewriting this code in various save() locations, you can extract-method:

import traceback
def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling

Finally, you can prettify it using a decorator that protects methods which use save():

from functools import wraps
def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped

@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...

Even if you implement the decorator above, it’s still convenient to keep try_rolling_back() as an extracted method in case you need to use it manually for cases where specific handling is required, and the generic decorator handling isn’t enough.


回答 13

这对我来说是非常奇怪的行为。我很惊讶没有人想到保存点。在我的代码中,查询失败是预期的行为:

from django.db import transaction
@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped

我已经以这种方式更改代码以使用保存点:

from django.db import transaction
@transaction.commit_on_success
def update():
    skipped = 0
    sid = transaction.savepoint()
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped

This is very strange behavior for me. I’m surprised that no one thought of savepoints. In my code failing query was expected behavior:

from django.db import transaction
@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped

I have changed code this way to use savepoints:

from django.db import transaction
@transaction.commit_on_success
def update():
    skipped = 0
    sid = transaction.savepoint()
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped

回答 14

在Flask shell中,我只需要执行session.rollback()就可以越过这个问题。

In Flask shell, all I needed to do was a session.rollback() to get past this.
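For example, with a Flask-SQLAlchemy setup this amounts to the following (the import path is hypothetical; use wherever your db object actually lives):

# inside `flask shell`
from myapp import db       # hypothetical: wherever your Flask-SQLAlchemy instance is defined
db.session.rollback()       # clears the aborted transaction on the scoped session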


回答 15

我也遇到过这个问题。出现该错误是因为出错的事务没有被正确结束。我在这里找到了PostgreSQL的事务控制(Transaction Control)命令:

事务控制

以下命令用于控制事务:

BEGIN TRANSACTION − To start a transaction.

COMMIT − To save the changes; alternatively you can use the END TRANSACTION command.

ROLLBACK − To rollback the changes.

所以我使用END TRANSACTION来结束出错的事务,代码如下:

    for key_of_attribute, command in sql_command.items():
        cursor = connection.cursor()
        g_logger.info("execute command :%s" % (command))
        try:
            cursor.execute(command)
            rows = cursor.fetchall()
            g_logger.info("the command:%s result is :%s" % (command, rows))
            result_list[key_of_attribute] = rows
            g_logger.info("result_list is :%s" % (result_list))
        except Exception as e:
            cursor.execute('END TRANSACTION;')
            g_logger.info("error command :%s and error is :%s" % (command, e))
    return result_list

I have met this issue; the error comes up because the erroneous transaction hasn't been ended properly. I found the PostgreSQL Transaction Control commands here:

Transaction Control

The following commands are used to control transactions

BEGIN TRANSACTION − To start a transaction.

COMMIT − To save the changes, alternatively you can use END TRANSACTION command.

ROLLBACK − To rollback the changes.

so i use the END TRANSACTION to end the error TRANSACTION, code like this:

    for key_of_attribute, command in sql_command.items():
        cursor = connection.cursor()
        g_logger.info("execute command :%s" % (command))
        try:
            cursor.execute(command)
            rows = cursor.fetchall()
            g_logger.info("the command:%s result is :%s" % (command, rows))
            result_list[key_of_attribute] = rows
            g_logger.info("result_list is :%s" % (result_list))
        except Exception as e:
            cursor.execute('END TRANSACTION;')
            g_logger.info("error command :%s and error is :%s" % (command, e))
    return result_list

回答 16

您可以通过"set_isolation_level(0)"禁用事务。

you could disable transaction via “set_isolation_level(0)”
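A short sketch of what that looks like with psycopg2 (the connection string is hypothetical; ISOLATION_LEVEL_AUTOCOMMIT is the named constant equal to 0):

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=test")  # hypothetical DSN
# Autocommit: every statement commits immediately, so a failed statement
# no longer leaves the connection stuck in an aborted transaction.
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)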


您需要安装postgresql-server-dev-X.Y来构建服务器端扩展,或者安装libpq-dev来构建客户端应用程序

问题:您需要安装postgresql-server-dev-X.Y来构建服务器端扩展,或者安装libpq-dev来构建客户端应用程序

我正在使用virtualenv开发Django项目,并将其连接到本地postgres数据库。当我运行项目时,它提示:

ImportError: No module named psycopg2.extensions

然后我用这个命令来安装

pip install psycopg2

然后在安装过程中会出现以下错误。

Downloading/unpacking psycopg2==2.4.4
  Downloading psycopg2-2.4.4.tar.gz (648kB): 648kB downloaded
  Running setup.py (path:/home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2/setup.py) egg_info for package psycopg2

    Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.

    Complete output from command python setup.py egg_info:
    running egg_info

creating pip-egg-info/psycopg2.egg-info

writing pip-egg-info/psycopg2.egg-info/PKG-INFO

writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt

writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt

writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'

warning: manifest_maker: standard file '-c' not found



Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.



----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2
Storing debug log for failure in /home/muhammadtaqi/.pip/pip.log

I am working on a Django project with virtualenv and connecting it to a local postgres database. When I run the project it says:

ImportError: No module named psycopg2.extensions

then I used this command to install it:

pip install psycopg2

then during the installation it gives the following error:

Downloading/unpacking psycopg2==2.4.4
  Downloading psycopg2-2.4.4.tar.gz (648kB): 648kB downloaded
  Running setup.py (path:/home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2/setup.py) egg_info for package psycopg2

    Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.

    Complete output from command python setup.py egg_info:
    running egg_info

creating pip-egg-info/psycopg2.egg-info

writing pip-egg-info/psycopg2.egg-info/PKG-INFO

writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt

writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt

writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'

warning: manifest_maker: standard file '-c' not found



Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.



----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2
Storing debug log for failure in /home/muhammadtaqi/.pip/pip.log

回答 0

使用以下命令,将解决错误:

sudo apt-get install postgresql

然后执行:

sudo apt-get install python-psycopg2

最后:

sudo apt-get install libpq-dev

Use these following commands, this will solve the error:

sudo apt-get install postgresql

then fire:

sudo apt-get install python-psycopg2

and last:

sudo apt-get install libpq-dev

回答 1

我只是从终端以root身份运行此命令,问题就解决了,

sudo apt-get install -y postgis postgresql-9.3-postgis-2.1
pip install psycopg2

要么

sudo apt-get install libpq-dev python-dev
pip install psycopg2

I just ran this command as root from the terminal and the problem was solved:

sudo apt-get install -y postgis postgresql-9.3-postgis-2.1
pip install psycopg2

or

sudo apt-get install libpq-dev python-dev
pip install psycopg2

回答 2

只需安装libpq-dev

$ sudo apt-get install libpq-dev

Just install libpq-dev

$ sudo apt-get install libpq-dev

回答 3

对我来说,这个简单的命令解决了这个问题:

sudo apt-get install postgresql postgresql-contrib libpq-dev python-dev

然后我可以做:

 pip install psycopg2

For me this simple command solved the problem:

sudo apt-get install postgresql postgresql-contrib libpq-dev python-dev

Then I can do:

 pip install psycopg2

回答 4

对于Python 3,我做到了:

sudo apt install python3-dev postgresql postgresql-contrib python3-psycopg2 libpq-dev

然后我能够做到:

pip3 install psycopg2

For Python 3, I did:

sudo apt install python3-dev postgresql postgresql-contrib python3-psycopg2 libpq-dev

and then I was able to do:

pip3 install psycopg2

回答 5

他们更改了psycopg2的打包方式。安装二进制版本为我解决了此问题。如果您想自己编译二进制文件,以上答案仍然有效。

参见http://initd.org/psycopg/docs/news.html#what-s-new-in-psycopg-2-8

默认情况下不再安装二进制软件包。必须显式使用'psycopg2-binary'软件包。

还有http://initd.org/psycopg/docs/install.html#binary-install-from-pypi

因此,如果您不需要编译自己的二进制文件,请使用:

pip install psycopg2-binary

They changed the packaging for psycopg2. Installing the binary version fixed this issue for me. The above answers still hold up if you want to compile the binary yourself.

See http://initd.org/psycopg/docs/news.html#what-s-new-in-psycopg-2-8.

Binary packages no longer installed by default. The ‘psycopg2-binary’ package must be used explicitly.

And http://initd.org/psycopg/docs/install.html#binary-install-from-pypi

So if you don’t need to compile your own binary, use:

pip install psycopg2-binary

回答 6

您必须安装postgresql-server-dev-X.Y,其中X.Y是您的服务器版本,它会安装libpq-dev以及服务器端开发所需的其他模块。就我而言:

apt-get install postgresql-server-dev-9.5

读取软件包列表…完成
构建依赖关系树
读取状态信息…完成
以下软件包是自动安装的,不再需要:libmysqlclient18 mysql-common
使用'apt-get autoremove'删除它们。
将安装以下额外的软件包:libpq-dev
建议的软件包:postgresql-doc-10
将安装以下新软件包:libpq-dev postgresql-server-dev-9.5

就你而言

sudo apt-get install postgresql-server-dev-X.Y
sudo apt-get install python-psycopg2

You must set up postgresql-server-dev-X.Y, where X.Y is your server's version; it will install libpq-dev and the other modules needed for server-side development. In my case it was:

apt-get install postgresql-server-dev-9.5

Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required: libmysqlclient18 mysql-common
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed: libpq-dev
Suggested packages: postgresql-doc-10
The following NEW packages will be installed: libpq-dev postgresql-server-dev-9.5

In your case:

sudo apt-get install postgresql-server-dev-X.Y
sudo apt-get install python-psycopg2

回答 7

我在Ubuntu 18.04上使用了虚拟环境,并且由于我只想将其安装为客户端,所以我只需要这样做:

sudo apt install libpq-dev
pip install psycopg2

并安装没有问题。当然,您可以像其他答案一样使用二进制文件,但是我更喜欢这种解决方案,因为它是在requirements.txt文件中声明的。

I was using a virtual environment on Ubuntu 18.04, and since I only wanted to install it as a client, I only had to do:

sudo apt install libpq-dev
pip install psycopg2

And installed without problems. Of course, you can use the binary as other answers said, but I preferred this solution since it was stated in a requirements.txt file.