The End Of Life date (EOL, sunset date) for Python 2.7 has been moved
five years into the future, to 2020. This decision was made to
clarify the status of Python 2.7 and relieve worries for those users
who cannot yet migrate to Python 3. See also PEP 466.
There are a lot of comments here from people who aren’t on the python-dev list and don’t really understand what this diff actually means.
The core developers are not required to maintain 2.7 post-2015, and most of them won’t be involved in it. That part hasn’t changed.
What is happening is that Red Hat is preparing to cut a RHEL 7 release, which, AFAIK, they support for up to 13 years depending on how much you pay them. So they will need to figure out how to support 2.7 themselves at least through 2027.
Here is where I am reading between the lines. RH are well within their rights to fork Python and keep their maintenance patches to themselves and their customers (Python is not copyleft). But they are nice guys, so maybe they are willing to upstream their changes, at least for a while, if there is still a Python project willing to accept them. Again, this is my speculation based on the ML discussion, not what RH has actually said they will do.
An analogy can be made to Rails LTS, a commercial fork of Rails 2.x that patio11 was involved in [0]. Inevitably somebody is going to step in to support 2.7, and so let’s see what we can do to avoid a situation where the only way to keep running 2.7 is to subscribe to RHEL.
Meanwhile, there are some large companies that use 2.7 extensively on Windows (e.g. Enthought, Anaconda), and the thinking goes that somebody can probably be found to produce a Windows installer once in a while, assuming that Python.org will still host a download.
So really what is happening here is not very exciting. The core committers aren’t doing anything different than leaving the project as originally planned. What is happening is that they will leave the lights on in the source control repository and on the FTP server, so as to capture the free labor from people at large companies who have an interest in continuing to support 2.7.
The alternative is that RH and other vendors create proprietary and expensive forks of Python 2.7. That may end up happening anyway, but it will take longer for your employer to notice you should stop contributing your patches back if binaries still appear on python.org and you don’t have to ask IT to set up SCM and a bug tracker, etc.
This article says: “When 2.7 is released, the 2.x line will move into five years of a bug fix-only mode.”
So, as far as I can see, Python 2.7 was the last 2.x feature-adding release; bugs that are found will be fixed (for some time), but new features only go into 3.x releases.
The Python Developer’s Guide lists the “Status of Python branches” from version 2.6 up to the current version, including their current support status with End-of-life dates.
Python 2.7 will be around forever. There is too much old code that uses it that no one wants to rewrite. There is already a fork called Tauthon, but we may see others if this pointless deadline becomes real.
In Python 3, the dict.keys() method returns a dictionary view object, which acts as a set. Iterating over the dictionary directly also yields keys, so turning a dictionary into a list results in a list of all the keys:
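A short interactive sketch of those behaviors:

```python
d = {"a": 1, "b": 2}

print(list(d))           # ['a', 'b']: iterating a dict yields its keys
print(list(d.keys()))    # ['a', 'b']: the keys view gives the same result
print(d.keys() & {"a"})  # {'a'}: dict views support set operations
```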
Not a full answer but perhaps a useful hint. If it is really the first item you want*, then
next(iter(q))
is much faster than
list(q)[0]
for large dicts, since the whole thing doesn’t have to be stored in memory.
For 10,000,000 items I found it to be almost 40,000 times faster.
*The first item in case of a dict being just a pseudo-random item before Python 3.6 (after that it’s ordered in the standard implementation, although it’s not advised to rely on it).
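A rough way to see the difference yourself (timings vary by machine, and the dict size here is arbitrary):

```python
import timeit

d = {i: i for i in range(1_000_000)}

# next(iter(d)) touches only the first entry; list(d)[0] copies every key first.
t_fast = timeit.timeit(lambda: next(iter(d)), number=100)
t_slow = timeit.timeit(lambda: list(d)[0], number=100)
print(t_fast < t_slow)  # True on CPython for a dict this large
```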
In many cases, this may be an XY Problem. Why are you indexing your dictionary keys by position? Do you really need to? Until recently, dictionaries were not even ordered in Python, so accessing the first element was arbitrary.
I just translated some Python 2 code to Python 3:
keys = d.keys()
for (i, res) in enumerate(some_list):
k = keys[i]
# ...
which is not pretty, but not very bad either. At first, I was about to replace it by the monstrous
k = next(itertools.islice(iter(keys), i, None))
before I realised this is all much better written as
for (k, res) in zip(d.keys(), some_list):
which works just fine.
I believe that in many other cases, indexing dictionary keys by position can be avoided. Although dictionaries are ordered in Python 3.7, relying on that is not pretty. The code above only works because the contents of some_list had been recently produced from the contents of d.
Have a hard look at your code if you really need to access a disk_keys element by index. Perhaps you don’t need to.
Try this:
keys = [next(iter(x.keys())) for x in test]
print(list(keys))
I want to check my environment for the existence of a variable, say "FOO", in Python. For this purpose, I am using the os standard library. After reading the library’s documentation, I have figured out 2 ways to achieve my goal:
Method 1:
if "FOO" in os.environ:
pass
Method 2:
if os.getenv("FOO") is not None:
pass
I would like to know which method, if either, is a good/preferred conditional and why.
Use the first; it directly tries to check if something is defined in environ. Though the second form works equally well, it’s lacking semantically since you get a value back if it exists and only use it for a comparison.
You’re trying to see if something is present in environ; why would you get it just to compare it and then toss it away?
That’s exactly what getenv does:
Get an environment variable, return None if it doesn’t exist. The
optional second argument can specify an alternate default.
(this also means your check could just be if getenv("FOO"))
You don’t want to get it; you want to check for its existence.
Either way, getenv is just a wrapper around environ.get but you don’t see people checking for membership in mappings with:
from os import environ
if environ.get('Foo') is not None:
To summarize, use:
if "FOO" in os.environ:
pass
if you just want to check for existence; use os.getenv("FOO") if you actually want to do something with the value you might get.
There is a case for either solution, depending on what you want to do conditional on the existence of the environment variable.
Case 1
When you want to take different actions purely based on the existence of the environment variable, without caring for its value, the first solution is the best practice. It succinctly describes what you test for: is ‘FOO’ in the list of environment variables.
if 'KITTEN_ALLERGY' in os.environ:
buy_puppy()
else:
buy_kitten()
Case 2
When you want to set a default value if the value is not defined in the environment variables the second solution is actually useful, though not in the form you wrote it:
server = os.getenv('MY_CAT_STREAMS', 'youtube.com')
or perhaps
server = os.environ.get('MY_CAT_STREAMS', 'youtube.com')
Note that if you have several configuration sources for your application, you might want to look into ChainMap, which allows you to merge multiple dicts based on keys. There is an example of this in the ChainMap documentation.
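A minimal sketch of that layering (the keys demo_color and demo_user are made up for illustration):

```python
import os
from collections import ChainMap

# Following the pattern in the ChainMap docs: real environment variables
# take priority over hard-coded defaults.
defaults = {"demo_color": "red", "demo_user": "guest"}
config = ChainMap(os.environ, defaults)

print(config["demo_color"])  # "red" unless an env var of that name is set
```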
In case you want to check if multiple env variables are not set, you can do the following:
import os
MANDATORY_ENV_VARS = ["FOO", "BAR"]
for var in MANDATORY_ENV_VARS:
if var not in os.environ:
raise EnvironmentError("Failed because {} is not set.".format(var))
My comment might not be relevant to the tags given; however, I was led to this page by my search.
I was looking for a similar check in R, and I came up with the following with the help of @hugovdbeg's post. I hope it is helpful for someone looking for a similar solution in R
I’m wondering if there is a way to load an object that was pickled in Python 2.4, with Python 3.4.
I’ve been running 2to3 on a large amount of company legacy code to get it up to date.
Having done this, when running the file I get the following error:
File "H:\fixers - 3.4\addressfixer - 3.4\trunk\lib\address\address_generic.py"
, line 382, in read_ref_files
d = pickle.load(open(mshelffile, 'rb'))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 1: ordinal
not in range(128)
Looking at the pickled object in contention, it’s a dict in a dict, containing keys and values of type str.
So my question is: Is there a way to load an object, originally pickled in python 2.4, with python 3.4?
You’ll have to tell pickle.load() how to convert Python bytestring data to Python 3 strings, or you can tell pickle to leave them as bytes.
The default is to try and decode all string data as ASCII, and that decoding fails. See the pickle.load() documentation:
Optional keyword arguments are fix_imports, encoding and errors, which are used to control compatibility support for pickle stream generated by Python 2. If fix_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. The encoding and errors tell pickle how to decode 8-bit string instances pickled by Python 2; these default to ‘ASCII’ and ‘strict’, respectively. The encoding can be ‘bytes’ to read these 8-bit string instances as bytes objects.
Setting the encoding to latin1 allows you to import the data directly:
with open(mshelffile, 'rb') as f:
d = pickle.load(f, encoding='latin1')
but you’ll need to verify that none of your strings are decoded using the wrong codec; Latin-1 works for any input as it maps the byte values 0-255 to the first 256 Unicode codepoints directly.
The alternative would be to load the data with encoding='bytes', and decode all bytes keys and values afterwards.
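A self-contained sketch of that bytes-then-decode approach. The blob below is a hand-crafted stand-in for a Python 2 protocol-0 pickle of a dict of UTF-8 bytestrings, and UTF-8 is assumed to be the codec the original program used:

```python
import pickle

# Stand-in for a Python 2 pickle of {'caf\xc3\xa9': 'value'} (protocol 0).
blob = b"(dp0\nS'caf\\xc3\\xa9'\np1\nS'value'\np2\ns."

# encoding='bytes' leaves the Python 2 str data as bytes instead of guessing.
raw = pickle.loads(blob, encoding="bytes")

# Decode every key and value with the codec the Python 2 program used.
decoded = {k.decode("utf-8"): v.decode("utf-8") for k, v in raw.items()}
print(decoded)  # {'café': 'value'}
```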
How can I make an anaconda environment file which can be used on other computers?
I exported my anaconda python environment to YML using conda env export > environment.yml. The exported environment.yml contains this line prefix: /home/superdev/miniconda3/envs/juicyenv which maps to my anaconda’s location which will be different on other’s pcs.
I can’t find anything in the conda specs which allow you to export an environment file without the prefix: ... line. However, as Alex pointed out in the comments, conda doesn’t seem to care about the prefix line when creating an environment from file.
With that in mind, if you want the other user to have no knowledge of your default install path, you can remove the prefix line with grep before writing to environment.yml.
and the environment will get installed in their default conda environment path.
If you want to specify a different install path than the default for your system (not related to ‘prefix’ in the environment.yml), just use the -p flag followed by the required path.
Note that Conda recommends creating the environment.yml by hand, which is especially important if you want to share your environment across platforms (Windows/Linux/Mac). In that case, you can just leave out the prefix line.
I find exporting the packages in string format only is more portable than exporting the whole conda environment. As the previous answer already suggested:
$ conda list -e > requirements.txt
However, this requirements.txt contains build numbers which are not portable between operating systems, e.g. between Mac and Ubuntu. In conda env export we have the option --no-builds but not with conda list -e, so we can remove the build number by issuing the following command:
$ sed -i -E "s/^(.*\=.*)(\=.*)/\1/" requirements.txt
OS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I’ve done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I’d like to do like the cool kids do, and just do something like pip3 install pyserial. But it’s not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet.
Update: this is no longer needed with Python 3.4. It installs pip3 as part of the stock install.
I ended up posting the same question to the python mailing list and got the following answer:
# download and install setuptools
curl -O https://bootstrap.pypa.io/ez_setup.py
python3 ez_setup.py
# download and install pip
curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
which solved my problem perfectly. After which, for my own setup, I added:
cd /usr/local/bin
ln -s ../../../Library/Frameworks/Python.framework/Versions/3.3/bin/pip pip
The last step gives you some warnings and errors that you have to resolve. One of those will be to download and install the Mac OS X command-line tools.
then:
brew install python3
This gave me python3 and pip3 in my path.
pieter$ which pip3 python3
/usr/local/bin/pip3
/usr/local/bin/python3
If you have both python2 and python3 installed on your system, the pip upgrade will point to python2 by default. Hence, we must specify the version of Python (python3) and use the below command:
python3 -m pip install --upgrade pip
This command will uninstall the previously installed pip and install the new version- upgrading your pip.
On Mac OS X Mojave, python stands for Python version 2.7 and python3 for Python version 3. The same goes for pip and pip3. So, to upgrade pip for Python 3, do this:
OR if you don’t want to overwrite the python executable (safer, at least on some distros yum needs python to be 2.x, such as for RHEL6) – you can install python3.* as a concurrent instance to the system default with an altinstall:
$ make altinstall
Now if you want an alternative installation directory, you can pass --prefix to the configure command.
Example: for ‘installing’ Python in /opt/local, just add --prefix=/opt/local.
After the make install step: in order to use your new Python installation, you may still have to add [prefix]/bin to $PATH and [prefix]/lib to $LD_LIBRARY_PATH (depending on the --prefix you passed).
Follow these instructions to install Python 3.4 on RHEL 6/7 or CentOS 6/7:
# 1. Install the Software Collections tools:
yum install scl-utils
# 2. Download a package with repository for your system.
# (See the Yum Repositories on external link. For RHEL/CentOS 6:)
wget https://www.softwarecollections.org/en/scls/rhscl/rh-python34/epel-6-x86_64/download/rhscl-rh-python34-epel-6-x86_64.noarch.rpm
# or for RHEL/CentOS 7
wget https://www.softwarecollections.org/en/scls/rhscl/rh-python34/epel-7-x86_64/download/rhscl-rh-python34-epel-7-x86_64.noarch.rpm
# 3. Install the repo package (on RHEL you will need to enable optional channel first):
yum install rhscl-rh-python34-*.noarch.rpm
# 4. Install the collection:
yum install rh-python34
# 5. Start using software collections:
scl enable rh-python34 bash
I see all the other answers either asking you to compile Python 3 from source or to install a binary RPM package. Here is another approach: enable EPEL (Extra Packages for Enterprise Linux) and then install Python using yum. Steps for RHEL 7.5 (Maipo):
yum install wget -y
wget https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-XX.noarch.rpm # verify the actual RPM name by browsing the directory in a browser
rpm -ivh epel-*.rpm
yum install python36
Note that sudo is not needed for the last command. Now we can see that python 3 is the default for the current shell:
python --version
Python 3.5.1
Simply skip the last command if you’d rather have Python 2 as the default for the current shell.
Now let’s say that your Python 3 scripts give you an error like /usr/bin/env: python3: No such file or directory. That’s because the installation is usually done to an unusual path:
/opt/rh/rh-python35/root/bin/python3
The above would normally be a symlink. If you want python3 to be automatically added to the $PATH for all users on startup, one way to do this is adding a file like:
It should just work. One exception would be an auto-generated user like "jenkins" on a Jenkins server, which doesn't have a login shell. In that case, manually adding the path to $PATH in your scripts would be one way to go.
Finally, if you’re using sudo pip3 to install packages, but it tells you that pip3 cannot be found, it could be that you have a secure_path in /etc/sudoers. Checking with sudo visudo should confirm that. To temporarily use the standard PATH when running commands you can do, for example:
NOTE: There is a newer Python 3.6 by Software Collections, but I wouldn’t recommend it at this time, because I had major headaches trying to install Pycurl. For Python 3.5 that isn’t an issue because I just did sudo yum install sclo-python35-python-pycurl which worked out of the box.
If you are on RHEL and want a Red Hat supported Python, use Red Hat Software collections (RHSCL). The EPEL and IUS packages are not supported by Red Hat. Also many of the answers above point to the CentOS software collections. While you can install those, they aren’t the Red Hat supported packages for RHEL.
Also, the top voted answer gives bad advice – On RHEL you do not want to change /usr/bin/python, /usr/bin/python2 because you will likely break yum and other RHEL admin tools. Take a look at /bin/yum, it is a Python script that starts with #!/usr/bin/python. If you compile Python from source, do not do a make install as root. That will overwrite /usr/bin/python. If you break yum it can be difficult to restore your system.
To use a software collection you have to enable it:
scl enable rh-python36 bash
However if you want Python 3 permanently enabled, you can add the following to your ~/.bashrc and then log out and back in again. Now Python 3 is permanently in your path.
# Add RHSCL Python 3 to my login environment
source scl_source enable rh-python36
Note: once you do that, typing python now gives you Python 3.6 instead of Python 2.7.
See the above article for all of this and a lot more detail.
You can install miniconda (https://conda.io/miniconda.html). That’s a bit more than just python 3.7 but the installation is very straightforward and simple.
You’ll have to accept the license agreement and choose some options in interactive mode (accept the defaults).
I believe it can also be installed silently somehow.
Using Python 3’s function annotations, is it possible to specify the type of items contained within a homogeneous list (or other collection) for the purpose of type hinting in PyCharm and other IDEs?
A pseudo-python code example for a list of int:
def my_func(l:list<int>):
pass
I know it’s possible using Docstring…
def my_func(l):
"""
:type l: list[int]
"""
pass
… but I prefer the annotation style if it’s possible.
Answering my own question; the TL;DR answer is: it used to be No, but it is now Yes.
Update 2
In September 2015, Python 3.5 was released with support for Type Hints, including a new typing module. This allows the specification of the types contained within collections. As of November 2015, JetBrains PyCharm 5.0 fully supports Python 3.5, including Type Hints.
With support from the BDFL, it’s almost certain now that python (probably 3.5) will provide a standardized syntax for type hints via function annotations.
As referenced in the PEP, there is an experimental type-checker (kind of like pylint, but for types) called mypy that already uses this standard, and doesn’t require any new syntax.
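On Python 3.5 through 3.8, the element type is spelled with the typing module instead; a minimal sketch:

```python
from typing import List

def my_func(l: List[int]) -> int:
    return sum(l)

# The hint is stored on the function and read by checkers such as mypy.
print(my_func([1, 2, 3]))  # 6
print(my_func.__annotations__["l"])
```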
As of Python 3.9, built-in types are generic with respect to type annotations (see PEP 585). This allows you to directly specify the type of elements:
def my_func(l: list[int]):
pass
Various tools may support this syntax earlier than Python 3.9. When annotations are not inspected at runtime, the syntax is valid using quoting or __future__.annotations.
# quoted
def my_func(l: 'list[int]'):
pass
# postponed evaluation of annotation
from __future__ import annotations
def my_func(l: list[int]):
pass
How can I implement the equivalent of a class's __getattr__ on a module?
Example
When calling a function that does not exist in a module’s statically defined attributes, I wish to create an instance of a class in that module, and invoke the method on it with the same name as failed in the attribute lookup on the module.
class A(object):
def salutation(self, accusative):
print "hello", accusative
# note this function is intentionally on the module, and not the class above
def __getattr__(mod, name):
return getattr(A(), name)
if __name__ == "__main__":
# i hope here to have my __getattr__ function above invoked, since
# salutation does not exist in the current namespace
salutation("world")
Which gives:
matt@stanley:~/Desktop$ python getattrmod.py
Traceback (most recent call last):
File "getattrmod.py", line 9, in <module>
salutation("world")
NameError: name 'salutation' is not defined
Recently some historical features have made a comeback, the module __getattr__ among them, and so the existing hack (a module replacing itself with a class in sys.modules at import time) should be no longer necessary.
In Python 3.7+, you just use the one obvious way. To customize attribute access on a module, define a __getattr__ function at the module level which should accept one argument (name of attribute), and return the computed value or raise an AttributeError:
This will also allow hooks into “from” imports, i.e. you can return dynamically generated objects for statements such as from my_module import whatever.
On a related note, along with the module getattr you may also define a __dir__ function at module level to respond to dir(my_module). See PEP 562 for details.
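A minimal, self-contained sketch of PEP 562. The module is built in-process here so the example runs as one file; in real code the body of src would simply be the contents of my_module.py:

```python
import sys
import types

src = """
class A:
    def salutation(self, accusative):
        return "hello " + accusative

def __getattr__(name):
    # PEP 562: called only when normal module attribute lookup fails.
    return getattr(A(), name)
"""

# Build "my_module" in-process and register it.
mod = types.ModuleType("my_module")
exec(src, mod.__dict__)
sys.modules["my_module"] = mod

from my_module import salutation  # resolved via the module-level __getattr__
print(salutation("world"))  # hello world
```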
There are two basic problems you are running into here:
__xxx__ methods are only looked up on the class
TypeError: can't set attributes of built-in/extension type 'module'
(1) means any solution would have to also keep track of which module was being examined, otherwise every module would then have the instance-substitution behavior; and (2) means that (1) isn’t even possible… at least not directly.
Fortunately, sys.modules is not picky about what goes there, so a wrapper will work, but only for module access (i.e. import somemodule; somemodule.salutation('world')). For same-module access you pretty much have to yank the methods from the substitution class and add them to globals(), either with a custom method on the class (I like using .export()) or with a generic function (such as those already listed as answers). One thing to keep in mind: if the wrapper creates a new instance each time, and the globals solution does not, you end up with subtly different behavior. Oh, and you don’t get to use both at the same time; it’s one or the other.
There is actually a hack that is occasionally used and recommended: a
module can define a class with the desired functionality, and then at
the end, replace itself in sys.modules with an instance of that class
(or with the class, if you insist, but that’s generally less useful).
E.g.:
This works because the import machinery is actively enabling this
hack, and as its final step pulls the actual module out of
sys.modules, after loading it. (This is no accident. The hack was
proposed long ago and we decided we liked enough to support it in the
import machinery.)
So the established way to accomplish what you want is to create a single class in your module, and as the last act of the module replace sys.modules[__name__] with an instance of your class — and now you can play with __getattr__/__setattr__/__getattribute__ as needed.
Note 1: If you use this functionality then anything else in the module, such as globals, other functions, etc., will be lost when the sys.modules assignment is made — so make sure everything needed is inside the replacement class.
Note 2: To support from module import * you must have __all__ defined in the class; for example:
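A minimal sketch of the whole pattern. The names greeting_demo and _ModuleInterface are made up; in a real module the sys.modules assignment would be the last statement of the module and would use __name__:

```python
import sys

class _ModuleInterface:
    # __all__ on the class supports `from greeting_demo import *`
    __all__ = ["salutation"]

    def salutation(self, accusative):
        return "hello " + accusative

# In a real module this line would read: sys.modules[__name__] = _ModuleInterface()
# Here we register under a made-up name so the demo is self-contained.
sys.modules["greeting_demo"] = _ModuleInterface()

import greeting_demo
print(greeting_demo.salutation("world"))  # hello world
```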
class A(object):
    ....

# The implicit global instance
a = A()

def salutation(*arg, **kw):
    a.salutation(*arg, **kw)
Why? So that the implicit global instance is visible.
For examples, look at the random module, which creates an implicit global instance to slightly simplify the use cases where you want a “simple” random number generator.
Similar to what @Håvard S proposed, in a case where I needed to implement some magic on a module (like __getattr__), I would define a new class that inherits from types.ModuleType and put that in sys.modules (probably replacing the module where my custom ModuleType was defined).
See the main __init__.py file of Werkzeug for a fairly robust implementation of this.
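A condensed sketch of that idea (all names are made up; assigning __class__ on a module requires Python 3.5+):

```python
import sys
import types

class MagicModule(types.ModuleType):
    """Module subclass with dynamic attribute lookup."""
    def __getattr__(self, name):
        # Reached only when the attribute is not in the module's dict.
        return "computed:" + name

# Create a module and swap its class in place.
mod = types.ModuleType("demo")
mod.__class__ = MagicModule
sys.modules["demo"] = mod

import demo
print(demo.anything)  # computed:anything
```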
This is a bit of a hack, but:
import types
class A(object):
def salutation(self, accusative):
print "hello", accusative
def farewell(self, greeting, accusative):
print greeting, accusative
def AddGlobalAttribute(classname, methodname):
print "Adding " + classname + "." + methodname + "()"
def genericFunction(*args):
return globals()[classname]().__getattribute__(methodname)(*args)
globals()[methodname] = genericFunction
# set up the global namespace
x = 0 # X and Y are here to add them implicitly to globals, so
y = 0 # globals does not change as we iterate over it.
toAdd = []
def isCallableMethod(classname, methodname):
someclass = globals()[classname]()
something = someclass.__getattribute__(methodname)
return callable(something)
for x in globals():
print "Looking at", x
if isinstance(globals()[x], (types.ClassType, type)):
print "Found Class:", x
for y in dir(globals()[x]):
if y.find("__") == -1: # hack to ignore default methods
if isCallableMethod(x,y):
if y not in globals(): # don't override existing global names
toAdd.append((x,y))
for x in toAdd:
AddGlobalAttribute(*x)
if __name__ == "__main__":
salutation("world")
farewell("goodbye", "world")
This works by iterating over all the objects in the global namespace. If an item is a class, it iterates over the class attributes. If an attribute is callable, it is added to the global namespace as a function.
It ignores all attributes which contain “__”.
I wouldn’t use this in production code, but it should get you started.
Here’s my own humble contribution — a slight embellishment of @Håvard S’s highly rated answer, but a bit more explicit (so it might be acceptable to @S.Lott, even though probably not good enough for the OP):
Create your module file that has your classes. Import the module. Run getattr on the module you just imported. You can do a dynamic import using __import__ and pull the module from sys.modules.
If the values are boolean, the fastest approach is to use the not operator:
>>> x = True
>>> x = not x # toggle
>>> x
False
>>> x = not x # toggle
>>> x
True
>>> x = not x # toggle
>>> x
False
Solution using subtraction
If the values are numerical, then subtraction from the total is a simple and fast way to toggle values:
>>> A = 5
>>> B = 3
>>> total = A + B
>>> x = A
>>> x = total - x # toggle
>>> x
3
>>> x = total - x # toggle
>>> x
5
>>> x = total - x # toggle
>>> x
3
The technique generalizes to any pair of integers: the xor-by-one step is replaced with an xor by a precomputed constant:
>>> A = 205
>>> B = -117
>>> t = A ^ B # precomputed toggle constant
>>> x = A
>>> x ^= t # toggle
>>> x
-117
>>> x ^= t # toggle
>>> x
205
>>> x ^= t # toggle
>>> x
-117
(This idea was submitted by Nick Coghlan and later generalized by @zxxc.)
Solution using a dictionary
If the values are hashable, you can use a dictionary:
>>> A = 'xyz'
>>> B = 'pdq'
>>> d = {A:B, B:A}
>>> x = A
>>> x = d[x] # toggle
>>> x
'pdq'
>>> x = d[x] # toggle
>>> x
'xyz'
>>> x = d[x] # toggle
>>> x
'pdq'
>>> A = [1,2,3]
>>> B = [4,5,6]
>>> x = A
>>> x = B if x == A else A
>>> x
[4, 5, 6]
>>> x = B if x == A else A
>>> x
[1, 2, 3]
>>> x = B if x == A else A
>>> x
[4, 5, 6]
Solution using itertools
If you have more than two values, the itertools.cycle() function provides a generic fast way to toggle between successive values:
Note that in Python 3 the next() method was renamed to __next__(), so the first line would now be written as toggle = itertools.cycle(['red', 'green', 'blue']).__next__
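A sketch of that usage, written with the Python 3 __next__ spelling:

```python
import itertools

# Grab the cycle's __next__ method as the toggle function.
toggle = itertools.cycle(['red', 'green', 'blue']).__next__

print(toggle())  # red
print(toggle())  # green
print(toggle())  # blue
print(toggle())  # red
```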
If p is a boolean, p = not p switches between True and False.
Here is another, less intuitive, way. The advantage is that you can cycle over multiple values, not just two (0 and 1).
For two values (toggling):
>>> x = [1, 0]
>>> toggle = x[toggle]
For multiple values (e.g. 4):
>>> x = [1, 2, 3, 0]
>>> toggle = x[toggle]
I did not expect this solution to also be nearly the fastest:
>>> stmt1 = """
toggle = 0
for i in xrange(0, 100):
    toggle = 1 if toggle == 0 else 0
"""
>>> stmt2 = """
x = [1, 0]
toggle = 0
for i in xrange(0, 100):
    toggle = x[toggle]
"""
>>> t1 = timeit.Timer(stmt=stmt1)
>>> t2 = timeit.Timer(stmt=stmt2)
>>> print "%.2f usec/pass" % (1000000 * t1.timeit(number=100000) / 100000)
7.07 usec/pass
>>> print "%.2f usec/pass" % (1000000 * t2.timeit(number=100000) / 100000)
6.19 usec/pass
>>> stmt3 = """
toggle = False
for i in xrange(0, 100):
    toggle = (not toggle) & 1
"""
>>> t3 = timeit.Timer(stmt=stmt3)
>>> print "%.2f usec/pass" % (1000000 * t3.timeit(number=100000) / 100000)
9.84 usec/pass
>>> stmt4 = """
x = 0
for i in xrange(0, 100):
    x = x - 1
"""
>>> t4 = timeit.Timer(stmt=stmt4)
>>> print "%.2f usec/pass" % (1000000 * t4.timeit(number=100000) / 100000)
6.32 usec/pass
The not operator negates your variable (converting it into a boolean if it isn’t already one). You can probably use 1 and 0 interchangeably with True and False, so just negate it:
toggle = not toggle
But if you are using two arbitrary values, use an inline if:
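For instance, a small sketch with two arbitrary values:

```python
a, b = "heads", "tails"

toggle = a
toggle = b if toggle == a else a  # flips to "tails"
print(toggle)
toggle = b if toggle == a else a  # flips back to "heads"
print(toggle)
```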
Surprisingly, nobody mentions good old division modulo 2:
In : x = (x + 1) % 2 ; x
Out: 1
In : x = (x + 1) % 2 ; x
Out: 0
In : x = (x + 1) % 2 ; x
Out: 1
In : x = (x + 1) % 2 ; x
Out: 0
Note that it is equivalent to x = 1 - x, but the advantage of the modulo technique is that the group size, or the length of the interval, can be bigger than just 2 elements, giving you a round-robin-style interleaving scheme to loop over.
Now just for 2, toggling can be a bit shorter (using bit-wise operator):
x = x ^ 1
One way to toggle is to use multiple assignment:
>>> a = 5
>>> b = 3
>>> t = a, b = b, a
>>> t[0]
3
>>> t = a, b = b, a
>>> t[0]
5
The easiest way to toggle between 1 and 0 is to subtract from 1.
def toggle(value):
return 1 - value
Using an exception handler:
>>> def toggle(x):
...     try:
...         return x/x - x/x
...     except ZeroDivisionError:
...         return 1
...
>>> x = 0
>>> x = toggle(x)
>>> x
1
>>> x = toggle(x)
>>> x
0
>>> x = toggle(x)
>>> x
1
>>> x = toggle(x)
>>> x
0
OK, I am the worst:
import math
import sys

d = {1: 0, 0: 1}
l = [1, 0]

def exception_approach(x):
    try:
        return x/x - x/x
    except ZeroDivisionError:
        return 1

def cosinus_approach(x):
    return abs(int(math.cos(x * 0.5 * math.pi)))

def module_approach(x):
    return (x + 1) % 2

def subs_approach(x):
    return x - 1

def if_approach(x):
    return 0 if x == 1 else 1

def list_approach(x):
    global l
    return l[x]

def dict_approach(x):
    global d
    return d[x]

def xor_approach(x):
    return x ^ 1

def not_approach(x):
    b = bool(x)
    p = not b
    return int(p)

funcs = [exception_approach, cosinus_approach, dict_approach, module_approach,
         subs_approach, if_approach, list_approach, xor_approach, not_approach]

f = funcs[int(sys.argv[1])]
print "\n\n\n", f.func_name
x = 0
for _ in range(0, 100000000):
    x = f(x)
>>> toggle(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: descriptor 'conjugate' requires a 'complex' object but received a 'int'
Perform changes to LHS and RHS:
>>> x += 1+2j
>>> x
(3+5j)
…but be careful manipulating the RHS:
>>> z = 1-1j
>>> z += 2j
>>> z
(1+1j) # whoops! toggled it!
Variables a and b can be ANY two values, like 0 and 1, or 117 and 711, or “heads” and “tails”. No math is used, just a quick swap of the values each time a toggle is desired.
a = True
b = False
a,b = b,a # a is now False
a,b = b,a # a is now True