First, what happens when T0 is not None? I would test that, then I would adjust the values I pass to plt.subplot(); maybe try values 131, 132, and 133, or values that depend on whether or not T0 exists.
Second, after plt.show() is called, a new figure is created. To deal with this, you can either:
Call plt.savefig('tessstttyyy.png', dpi=100) before you call plt.show(), or
Save the figure before you show() by calling plt.gcf() ("get current figure"); you can then call savefig() on this Figure object at any time.
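A minimal sketch of that order of operations (the filename and data are just placeholders):

import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [4, 5, 6])
fig = plt.gcf()                           # grab the current Figure before show()
fig.savefig('tessstttyyy.png', dpi=100)   # save first, while the figure still has content
plt.show()                                # then display it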
class A(object):
# code for A here
class B(object):
# code for B here
class C(A, B):
def __init__(self):
# What's the right code to write here to ensure
# A.__init__ and B.__init__ get called?
There are two typical approaches to writing C's __init__: chaining with super(), or calling each parent's __init__ explicitly.
So what’s the correct way again? It’s easy to say “just be consistent, follow one or the other”, but if A or B are from a 3rd party library, what then? Is there an approach that can ensure that all parent class constructors get called (and in the correct order, and only once)?
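For reference, output like the one below is what you get when A and B each chain with super() while C calls both parents explicitly; a sketch reconstructed from the output shown below (not the question's original listing):

class A(object):
    def __init__(self):
        print("Entering A")
        super(A, self).__init__()
        print("Leaving A")

class B(object):
    def __init__(self):
        print("Entering B")
        super(B, self).__init__()
        print("Leaving B")

class C(A, B):
    def __init__(self):
        print("Entering C")
        A.__init__(self)   # explicit call; super() inside A still continues along C's MRO to B
        B.__init__(self)   # explicit call again, so B runs a second time
        print("Leaving C")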
Entering C
Entering A
Entering B
Leaving B
Leaving A
Entering B
Leaving B
Leaving C
Note that B's __init__ gets called twice. If I do:
class A(object):
def __init__(self):
print("Entering A")
print("Leaving A")
class B(object):
def __init__(self):
print("Entering B")
super(B, self).__init__()
print("Leaving B")
class C(A, B):
def __init__(self):
print("Entering C")
super(C, self).__init__()
print("Leaving C")
Then I get:
Entering C
Entering A
Leaving A
Leaving C
Note that B's __init__ never gets called. So it seems that unless I know/control the __init__s of the classes I inherit from (A and B), I cannot make a safe choice for the class I'm writing (C).
Both ways work fine. The approach using super() leads to greater flexibility for subclasses.
In the direct call approach, C.__init__ can call both A.__init__ and B.__init__.
When using super(), the classes need to be designed for cooperative multiple inheritance, where C calls super, which invokes A's code, which will also call super, which invokes B's code. See http://rhettinger.wordpress.com/2011/05/26/super-considered-super for more detail on what can be done with super.
[Response to the question as later edited]
So it seems that unless I know/control the init’s of the classes I
inherit from (A and B) I cannot make a safe choice for the class I’m
writing (C).
The referenced article shows how to handle this situation by adding a wrapper class around A and B. There is a worked-out example in the section titled “How to Incorporate a Non-cooperative Class”.
One might wish that multiple inheritance were easier, letting you effortlessly compose Car and Airplane classes to get a FlyingCar, but the reality is that separately designed components often need adapters or wrappers before fitting together as seamlessly as we would like :-)
One other thought: if you’re unhappy with composing functionality using multiple inheritance, you can use composition for complete control over which methods get called on which occasions.
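A minimal sketch of that composition idea (the method name do_something is made up; A and B are the classes from the question):

class C:
    def __init__(self):
        self.a = A()           # hold instances instead of inheriting
        self.b = B()

    def do_something(self):
        self.a.do_something()  # C decides explicitly which object handles what, and when
        self.b.do_something()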
The answer to your question depends on one very important aspect: Are your base classes designed for multiple inheritance?
There are 3 different scenarios:
The base classes are unrelated, standalone classes.
If your base classes are separate entities that are capable of functioning independently and they don’t know each other, they’re not designed for multiple inheritance. Example:
class Foo:
def __init__(self):
self.foo = 'foo'
class Bar:
def __init__(self, bar):
self.bar = bar
Important: Notice that neither Foo nor Bar calls super().__init__()! This is why your code didn’t work correctly. Because of the way diamond inheritance works in python, classes whose base class is object should not call super().__init__(). As you’ve noticed, doing so would break multiple inheritance because you end up calling another class’s __init__ rather than object.__init__(). (Disclaimer: Avoiding super().__init__() in object-subclasses is my personal recommendation and by no means an agreed-upon consensus in the python community. Some people prefer to use super in every class, arguing that you can always write an adapter if the class doesn’t behave as you expect.)
This also means that you should never write a class that inherits from object and doesn't have an __init__ method. Not defining an __init__ method at all has the same effect as calling super().__init__(). If your class inherits directly from object, make sure to add an empty constructor like so:
class Base(object):
def __init__(self):
pass
Anyway, in this situation, you will have to call each parent constructor manually. There are two ways to do this:
Without super
class FooBar(Foo, Bar):
def __init__(self, bar='bar'):
Foo.__init__(self) # explicit calls without super
Bar.__init__(self, bar)
With super
class FooBar(Foo, Bar):
def __init__(self, bar='bar'):
super().__init__() # this calls all constructors up to Foo
super(Foo, self).__init__(bar) # this calls all constructors after Foo up
# to Bar
Each of these two methods has its own advantages and disadvantages. If you use super, your class will support dependency injection. On the other hand, it’s easier to make mistakes. For example if you change the order of Foo and Bar (like class FooBar(Bar, Foo)), you’d have to update the super calls to match. Without super you don’t have to worry about this, and the code is much more readable.
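A small sketch of the dependency-injection point (LoggedFoo and LoggedFooBar are made-up names; FooBar is the super()-based version above): because FooBar reaches its parents through the MRO rather than by name, a subclass can slip extra behaviour in front of Foo without editing FooBar:

class LoggedFoo(Foo):
    def __init__(self):
        print("initialising the Foo part...")
        super().__init__()

class LoggedFooBar(FooBar, LoggedFoo):
    pass   # MRO: LoggedFooBar -> FooBar -> LoggedFoo -> Foo -> Bar -> object

LoggedFooBar()   # prints the message before Foo.__init__ runs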
One of the classes is a mixin.
A mixin is a class that’s designed to be used with multiple inheritance. This means we don’t have to call both parent constructors manually, because the mixin will automatically call the 2nd constructor for us. Since we only have to call a single constructor this time, we can do so with super to avoid having to hard-code the parent class’s name.
Example:
class FooMixin:
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs) # forwards all unused arguments
self.foo = 'foo'
class Bar:
def __init__(self, bar):
self.bar = bar
class FooBar(FooMixin, Bar):
def __init__(self, bar='bar'):
super().__init__(bar) # a single call is enough to invoke
# all parent constructors
# NOTE: `FooMixin.__init__(self, bar)` would also work, but isn't
# recommended because we don't want to hard-code the parent class.
The important details here are:
The mixin calls super().__init__() and passes through any arguments it receives.
The subclass inherits from the mixin first: class FooBar(FooMixin, Bar). If the order of the base classes is wrong, the mixin’s constructor will never be called.
All base classes are designed for cooperative inheritance.
Classes designed for cooperative inheritance are a lot like mixins: They pass through all unused arguments to the next class. Like before, we just have to call super().__init__() and all parent constructors will be chain-called.
Example:
class CoopFoo:
def __init__(self, **kwargs):
super().__init__(**kwargs) # forwards all unused arguments
self.foo = 'foo'
class CoopBar:
def __init__(self, bar, **kwargs):
super().__init__(**kwargs) # forwards all unused arguments
self.bar = bar
class CoopFooBar(CoopFoo, CoopBar):
def __init__(self, bar='bar'):
super().__init__(bar=bar) # pass all arguments on as keyword
# arguments to avoid problems with
# positional arguments and the order
# of the parent classes
In this case, the order of the parent classes doesn’t matter. We might as well inherit from CoopBar first, and the code would still work the same. But that’s only true because all arguments are passed as keyword arguments. Using positional arguments would make it easy to get the order of the arguments wrong, so it’s customary for cooperative classes to accept only keyword arguments.
This is also an exception to the rule I mentioned earlier: Both CoopFoo and CoopBar inherit from object, but they still call super().__init__(). If they didn’t, there would be no cooperative inheritance.
Bottom line: The correct implementation depends on the classes you’re inheriting from.
The constructor is part of a class’s public interface. If the class is designed as a mixin or for cooperative inheritance, that must be documented. If the docs don’t mention anything of the sort, it’s safe to assume that the class isn’t designed for cooperative multiple inheritance.
Either approach (“new style” or “old style”) will work if you have control over the source code for A and B. Otherwise, use of an adapter class might be necessary.
Source code accessible: Correct use of “new style”
class A(object):
def __init__(self):
print("-> A")
super(A, self).__init__()
print("<- A")
class B(object):
def __init__(self):
print("-> B")
super(B, self).__init__()
print("<- B")
class C(A, B):
def __init__(self):
print("-> C")
# Use super here, instead of explicit calls to __init__
super(C, self).__init__()
print("<- C")
>>> C()
-> C
-> A
-> B
<- B
<- A
<- C
Here, method resolution order (MRO) dictates the following:
C(A, B) dictates A first, then B. MRO is C -> A -> B -> object.
super(A, self).__init__() continues along the MRO chain initiated in C.__init__ to B.__init__.
super(B, self).__init__() continues along the MRO chain initiated in C.__init__ to object.__init__.
You could say that this case is designed for multiple inheritance.
Source code accessible: Correct use of “old style”
class A(object):
def __init__(self):
print("-> A")
print("<- A")
class B(object):
def __init__(self):
print("-> B")
# Don't use super here.
print("<- B")
class C(A, B):
def __init__(self):
print("-> C")
A.__init__(self)
B.__init__(self)
print("<- C")
>>> C()
-> C
-> A
<- A
-> B
<- B
<- C
Here, MRO does not matter, since A.__init__ and B.__init__ are called explicitly. class C(B, A): would work just as well.
Although this case is not “designed” for multiple inheritance in the new style as the previous one was, multiple inheritance is still possible.
Now, what if A and B are from a third party library – i.e., you have no control over the source code for A and B? The short answer: You must design an adapter class that implements the necessary super calls, then use an empty class to define the MRO (see Raymond Hettinger’s article on super – especially the section, “How to Incorporate a Non-cooperative Class”).
Third-party parents: A does not implement super; B does
class A(object):
def __init__(self):
print("-> A")
print("<- A")
class B(object):
def __init__(self):
print("-> B")
super(B, self).__init__()
print("<- B")
class Adapter(object):
def __init__(self):
print("-> C")
A.__init__(self)
super(Adapter, self).__init__()
print("<- C")
class C(Adapter, B):
pass
>>> C()
-> C
-> A
<- A
-> B
<- B
<- C
Class Adapter implements super so that C can define the MRO, which comes into play when super(Adapter, self).__init__() is executed.
And what if it’s the other way around?
Third-party parents: A implements super; B does not
class A(object):
def __init__(self):
print("-> A")
super(A, self).__init__()
print("<- A")
class B(object):
def __init__(self):
print("-> B")
print("<- B")
class Adapter(object):
def __init__(self):
print("-> C")
super(Adapter, self).__init__()
B.__init__(self)
print("<- C")
class C(Adapter, A):
pass
>>> C()
-> C
-> A
<- A
-> B
<- B
<- C
Same pattern here, except the order of execution is switched in Adapter.__init__; super call first, then explicit call. Notice that each case with third-party parents requires a unique adapter class.
So it seems that unless I know/control the init’s of the classes I inherit from (A and B) I cannot make a safe choice for the class I’m writing (C).
Although you can handle the cases where you don’t control the source code of A and B by using an adapter class, it is true that you must know how the init’s of the parent classes implement super (if at all) in order to do so.
As Raymond said in his answer, a direct call to A.__init__ and B.__init__ works fine, and your code would be readable.
However, it does not use the inheritance link between C and those classes. Exploiting that link gives you more consistency and makes eventual refactoring easier and less error-prone. An example of how to do that:
class C(A, B):
def __init__(self):
print("entering c")
for base_class in C.__bases__: # (A, B)
base_class.__init__(self)
print("leaving c")
It mentions the useful method mro() that shows you the method resolution order. In your 2nd example, where you call super in A, the super call continues on in the MRO. The next class in the order is B, which is why B's __init__ is called the first time.
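For instance, with the C(A, B) hierarchy from the question:

print(C.mro())   # [<class 'C'>, <class 'A'>, <class 'B'>, <class 'object'>] (module prefixes omitted)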
Here’s a more technical article from the official python site:
If you are multiply sub-classing classes from third party libraries, then no, there is no blind approach to calling the base class __init__ methods (or any other methods) that actually works regardless of how the base classes are programmed.
super makes it possible to write classes designed to cooperatively implement methods as part of complex multiple inheritance trees which need not be known to the class author. But there’s no way to use it to correctly inherit from arbitrary classes that may or may not use super.
Essentially, whether a class is designed to be sub-classed using super or with direct calls to the base class is a property which is part of the class’ “public interface”, and it should be documented as such. If you’re using third-party libraries in the way that the library author expected and the library has reasonable documentation, it would normally tell you what you are required to do to subclass particular things. If not, then you’ll have to look at the source code for the classes you’re sub-classing and see what their base-class-invocation convention is. If you’re combining multiple classes from one or more third-party libraries in a way that the library authors didn’t expect, then it may not be possible to consistently invoke super-class methods at all; if class A is part of a hierarchy using super and class B is part of a hierarchy that doesn’t use super, then neither option is guaranteed to work. You’ll have to figure out a strategy that happens to work for each particular case.
I have been programming in python for about two years; mostly data stuff (pandas, mpl, numpy), but also automation scripts and small web apps. I’m trying to become a better programmer and increase my python knowledge and one of the things that bothers me is that I have never used a class (outside of copying random flask code for small web apps). I generally understand what they are, but I can’t seem to wrap my head around why I would need them over a simple function.
To add specificity to my question: I write tons of automated reports which always involve pulling data from multiple data sources (mongo, sql, postgres, apis), performing a lot or a little data munging and formatting, writing the data to csv/excel/html, and sending it out in an email. The scripts range from ~250 lines to ~600 lines. Would there be any reason for me to use classes to do this, and why?
Classes are the pillar of Object Oriented Programming. OOP is highly concerned with code organization, reusability, and encapsulation.
First, a disclaimer: OOP is partially in contrast to Functional Programming, which is a different paradigm used a lot in Python. Not everyone who programs in Python (or surely most languages) uses OOP. You can do a lot in Java 8 that isn’t very Object Oriented. If you don’t want to use OOP, then don’t. If you’re just writing one-off scripts to process data that you’ll never use again, then keep writing the way you are.
However, there are a lot of reasons to use OOP.
Some reasons:
Organization:
OOP defines well-known and standard ways of describing and defining both data and procedure in code. Both data and procedure can be stored at varying levels of definition (in different classes), and there are standard ways of talking about these definitions. That is, if you use OOP in a standard way, it will help your later self and others understand, edit, and use your code. Also, instead of using a complex, arbitrary data storage mechanism (dicts of dicts or lists or dicts or lists of dicts of sets, or whatever), you can name pieces of data structures and conveniently refer to them.
State: OOP helps you define and keep track of state. For instance, in a classic example, if you’re creating a program that processes students (for instance, a grade program), you can keep all the info you need about them in one spot (name, age, gender, grade level, courses, grades, teachers, peers, diet, special needs, etc.), and this data is persisted as long as the object is alive, and is easily accessible.
Encapsulation:
With encapsulation, procedure and data are stored together. Methods (an OOP term for functions) are defined right alongside the data that they operate on and produce. In a language like Java that allows for access control, or in Python, depending upon how you describe your public API, this means that methods and data can be hidden from the user. What this means is that if you need or want to change code, you can do whatever you want to the implementation of the code, but keep the public APIs the same.
Inheritance:
Inheritance allows you to define data and procedure in one place (in one class), and then override or extend that functionality later. For instance, in Python, I often see people creating subclasses of the dict class in order to add additional functionality. A common change is overriding the method that raises an exception when a missing key is requested, so that a default value is returned for unknown keys instead. This allows you to extend your own code now or later, allows others to extend your code, and allows you to extend other people's code (see the small dict-subclass sketch after this list).
Reusability: All of these reasons and others allow for greater reusability of code. Object oriented code allows you to write solid (tested) code once, and then reuse over and over. If you need to tweak something for your specific use case, you can inherit from an existing class and overwrite the existing behavior. If you need to change something, you can change it all while maintaining the existing public method signatures, and no one is the wiser (hopefully).
Again, there are several reasons not to use OOP, and you don’t need to. But luckily with a language like Python, you can use just a little bit or a lot, it’s up to you.
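A minimal sketch of the dict-subclass idea mentioned under Inheritance above (the class name and the default value are made up):

class DefaultingDict(dict):
    """A dict that returns a default instead of raising KeyError."""
    def __missing__(self, key):        # dict calls this hook when a key is absent
        return "unknown"

grades = DefaultingDict(math=3.3)
print(grades["math"])      # 3.3
print(grades["history"])   # "unknown" rather than a KeyError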
An example of the student use case (no guarantee on code quality, just an example):
Object Oriented
class Student(object):
def __init__(self, name, age, gender, level, grades=None):
self.name = name
self.age = age
self.gender = gender
self.level = level
self.grades = grades or {}
def setGrade(self, course, grade):
self.grades[course] = grade
def getGrade(self, course):
return self.grades[course]
def getGPA(self):
return sum(self.grades.values())/len(self.grades)
# Define some students
john = Student("John", 12, "male", 6, {"math":3.3})
jane = Student("Jane", 12, "female", 6, {"math":3.5})
# Now we can get to the grades easily
print(john.getGPA())
print(jane.getGPA())
Standard Dict
def calculateGPA(gradeDict):
return sum(gradeDict.values())/len(gradeDict)
students = {}
# We can set the keys to variables so we might minimize typos
name, age, gender, level, grades = "name", "age", "gender", "level", "grades"
john, jane = "john", "jane"
math = "math"
students[john] = {}
students[john][age] = 12
students[john][gender] = "male"
students[john][level] = 6
students[john][grades] = {math:3.3}
students[jane] = {}
students[jane][age] = 12
students[jane][gender] = "female"
students[jane][level] = 6
students[jane][grades] = {math:3.5}
# At this point, we need to remember who the students are and where the grades are stored. Not a huge deal, but avoided by OOP.
print(calculateGPA(students[john][grades]))
print(calculateGPA(students[jane][grades]))
Whenever you need to maintain state for your functions and it cannot be accomplished with generators (functions which yield rather than return). Generators maintain their own state.
If you want to override any of the standard operators, you need a class.
Whenever you have a use for a Visitor pattern, you’ll need classes. Every other design pattern can be accomplished more effectively and cleanly with generators, context managers (which are also better implemented as generators than as classes) and POD types (dictionaries, lists and tuples, etc.).
If you want to write “pythonic” code, you should prefer context managers and generators over classes. It will be cleaner.
If you want to extend functionality, you will almost always be able to accomplish it with containment rather than inheritance.
As every rule, this has an exception. If you want to encapsulate functionality quickly (ie, write test code rather than library-level reusable code), you can encapsulate the state in a class. It will be simple and won’t need to be reusable.
If you need a C++ style destructor (RAII), you definitely do NOT want to use classes. You want context managers.
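As a small illustration of the context-managers-as-generators point above (the function and file names are just examples):

from contextlib import contextmanager

@contextmanager
def opened_report(path):
    f = open(path, "w")     # acquire the resource
    try:
        yield f             # hand it to the with-block
    finally:
        f.close()           # runs even if the block raises, much like a C++ destructor

with opened_report("report.txt") as report:
    report.write("done")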
I think you do it right. Classes are reasonable when you need to simulate some business logic or complex real-life processes with complicated relationships.
As example:
Several functions which share state
More than one copy of the same state variables
To extend the behavior of an existing functionality
A class defines a real world entity. If you are working on something that exists individually and has its own logic that is separate from others, you should create a class for it. For example, a class that encapsulates database connectivity.
It depends on your idea and design. If you are a good designer, then OOP will come out naturally in the form of various design patterns.
For simple script-level processing, OOP can be overhead.
Simply consider the basic benefits of OOP, like reusability and extensibility, and decide whether they are needed or not.
OOP makes complex things simpler and simple things more complex.
Keep things simple either way, with or without OOP; whichever is simpler, use that.
I’m trying to get a depth map with an uncalibrated method.
I can obtain the fundamental matrix by finding corresponding points with SIFT and then using cv2.findFundamentalMat. I then use cv2.stereoRectifyUncalibrated to get the homography matrices for each image. Finally I use cv2.warpPerspective to rectify and compute the disparity, but this doesn't create a good depth map. The values are very high, so I'm wondering if I have to use warpPerspective or if I have to calculate a rotation matrix from the homography matrices I got with stereoRectifyUncalibrated.
I'm not sure how to build the projective matrix in the case where the homography matrices obtained with stereoRectifyUncalibrated are used to rectify.
A part of the code:
#Obtainment of the correspondent point with SIFT
sift = cv2.SIFT()
###find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(dst1,None)
kp2, des2 = sift.detectAndCompute(dst2,None)
###FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
good = []
pts1 = []
pts2 = []
###ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
if m.distance < 0.8*n.distance:
good.append(m)
pts2.append(kp2[m.trainIdx].pt)
pts1.append(kp1[m.queryIdx].pt)
pts1 = np.array(pts1)
pts2 = np.array(pts2)
#Computation of the fundamental matrix
F,mask= cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS)
# Obtainment of the rectification matrix and use of the warpPerspective to transform them...
pts1 = pts1[:,:][mask.ravel()==1]
pts2 = pts2[:,:][mask.ravel()==1]
pts1 = np.int32(pts1)
pts2 = np.int32(pts2)
p1fNew = pts1.reshape((pts1.shape[0] * 2, 1))
p2fNew = pts2.reshape((pts2.shape[0] * 2, 1))
retBool ,rectmat1, rectmat2 = cv2.stereoRectifyUncalibrated(p1fNew,p2fNew,F,(2048,2048))
dst11 = cv2.warpPerspective(dst1,rectmat1,(2048,2048))
dst22 = cv2.warpPerspective(dst2,rectmat2,(2048,2048))
#calculation of the disparity
stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET,ndisparities=16*10, SADWindowSize=9)
disp = stereo.compute(dst22.astype(np.uint8), dst11.astype(np.uint8)).astype(np.float32)
plt.imshow(disp);plt.colorbar();plt.clim(0,400)#;plt.show()
plt.savefig("0gauche.png")
#plot depth by using disparity focal length `C1[0,0]` from stereo calibration and `T[0]` the distance between cameras
plt.imshow(C1[0,0]*T[0]/(disp),cmap='hot');plt.clim(-0,500);plt.colorbar();plt.show()
Here are the rectified pictures with the uncalibrated method (and warpPerspective):
Here are the rectified pictures with the calibrated method:
I don't know why the difference is so significant between the two kinds of pictures. And for the calibrated method, it doesn't seem aligned.
The disparity map using the uncalibrated method:
The depths are calculated with C1[0,0]*T[0]/(disp), with T from stereoCalibrate. The values are very high.
———— EDIT LATER ————
I tried to “mount” the reconstruction matrix ([Devernay97], [Garcia01]) with the homography matrix obtained with “stereoRectifyUncalibrated”, but the result is still not good. Am I doing this correctly?
Y=np.arange(0,2048)
X=np.arange(0,2048)
(XX_field,YY_field)=np.meshgrid(X,Y)
#I mount the X, Y and disparity in a same 3D array
stock = np.concatenate((np.expand_dims(XX_field,2),np.expand_dims(YY_field,2)),axis=2)
XY_disp = np.concatenate((stock,np.expand_dims(disp,2)),axis=2)
XY_disp_reshape = XY_disp.reshape(XY_disp.shape[0]*XY_disp.shape[1],3)
Ts = np.hstack((np.zeros((3,3)),T_0)) #i use only the translations obtained with the rectified calibration...Is it correct?
# I establish the projective matrix with the homography matrix
P11 = np.dot(rectmat1,C1)
P1 = np.vstack((np.hstack((P11,np.zeros((3,1)))),np.zeros((1,4))))
P1[3,3] = 1
# P1 = np.dot(C1,np.hstack((np.identity(3),np.zeros((3,1)))))
P22 = np.dot(np.dot(rectmat2,C2),Ts)
P2 = np.vstack((P22,np.zeros((1,4))))
P2[3,3] = 1
lambda_t = cv2.norm(P1[0,:].T)/cv2.norm(P2[0,:].T)
#I define the reconstruction matrix
Q = np.zeros((4,4))
Q[0,:] = P1[0,:].T
Q[1,:] = P1[1,:].T
Q[2,:] = lambda_t*P2[1,:].T - P1[1,:].T
Q[3,:] = P1[2,:].T
#I do the calculation to get my 3D coordinates
test = []
for i in range(0,XY_disp_reshape.shape[0]):
a = np.dot(inv(Q),np.expand_dims(np.concatenate((XY_disp_reshape[i,:],np.ones((1))),axis=0),axis=1))
test.append(a)
test = np.asarray(test)
XYZ = test[:,:,0].reshape(XY_disp.shape[0],XY_disp.shape[1],4)
The large black areas of your calibrated rectified images would lead me to believe that, for those, calibration was not done very well. There are a variety of reasons that could be at play: maybe the physical setup, maybe the lighting when you did calibration, and so on. There are plenty of camera calibration tutorials out there for that, though, and my understanding is that you are asking for a way to get a better depth map from an uncalibrated setup (this isn't 100% clear, but the title seems to support this, and I think that's what people will come here to try to find).
Your basic approach is correct, but the results can definitely be improved. This form of depth mapping is not among those that produce the highest quality maps (especially being uncalibrated). The biggest improvement will likely come from using a different stereo matching algorithm. The lighting may also be having a significant effect. The right image (at least to my naked eye) appears to be less well lit which could interfere with the reconstruction. You could first try brightening it to the same level as the other, or gather new images if that is possible. From here out, I’ll assume you have no access to the original cameras, so I’ll consider gathering new images, altering the setup, or performing calibration to be out of scope. (If you do have access to the setup and cameras, then I would suggest checking calibration and using a calibrated method as this will work better).
You used StereoBM for calculating your disparity (depth map) which does work, but StereoSGBM is much better suited for this application (it handles smoother edges better). You can see the difference below.
This article explains the differences in more depth:
Block matching focuses on high texture images (think a picture of a tree) and semi-global block matching will focus on sub pixel level matching and pictures with more smooth textures (think a picture of a hallway).
Without any explicit intrinsic camera parameters, specifics about the camera setup (like focal distance, distance between the cameras, distance to the subject, etc.), a known dimension in the image, or motion (to use structure from motion), you can only obtain 3D reconstruction up to a projective transform; you won’t have a sense of scale or necessarily rotation either, but you can still generate a relative depth map. You will likely suffer from some barrel and other distortions which could be removed with proper camera calibration, but you can get reasonable results without it as long as the cameras aren’t terrible (lens system isn’t too distorted) and are set up pretty close to canonical configuration (which basically means they are oriented such that their optical axes are as close to parallel as possible, and their fields of view overlap sufficiently). This doesn’t however appear to be the OPs issue as he did manage to get alright rectified images with the uncalibrated method.
Basic Procedure
Find at least 5 well-matched points in both images you can use to calculate the Fundamental Matrix (you can use any detector and matcher you like, I kept FLANN but used ORB to do detection as SIFT isn’t in the main version of OpenCV for 4.2.0)
Calculate the Fundamental Matrix, F, with findFundamentalMat
Undistort your images with stereoRectifyUncalibrated and warpPerspective
Calculate Disparity (Depth Map) with StereoSGBM
The results are much better:
Matches with ORB and FLANN
Undistorted images (left, then right)
Disparity
StereoBM
This result looks similar to the OP's problems (speckling, gaps, wrong depths in some areas).
StereoSGBM (tuned)
This result looks much better and uses roughly the same method as the OP, minus the final disparity calculation, making me think the OP would see similar improvements on his images, had they been provided.
Post filtering
There’s a good article about this in the OpenCV docs. I’d recommend looking at it if you need really smooth maps.
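A rough sketch of that post-filtering idea, assuming the opencv-contrib-python package (which provides cv2.ximgproc) is installed; the parameter values are illustrative, not tuned, and the images are the undistorted pair from the full code below:

import cv2

left_matcher = cv2.StereoSGBM_create(minDisparity=-4, numDisparities=16, blockSize=5)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

wls_filter = cv2.ximgproc.createDisparityWLSFilter(left_matcher)
wls_filter.setLambda(8000.0)    # smoothness of the filtered map
wls_filter.setSigmaColor(1.5)   # sensitivity to edges in the guide image

disp_left = left_matcher.compute(imgL_undistorted, imgR_undistorted)
disp_right = right_matcher.compute(imgR_undistorted, imgL_undistorted)
filtered_disp = wls_filter.filter(disp_left, imgL_undistorted, None, disp_right)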
The example photos above are frame 1 from the scene ambush_2 in the MPI Sintel Dataset.
Full code (Tested on OpenCV 4.2.0):
import cv2
import numpy as np
import matplotlib.pyplot as plt
imgL = cv2.imread("tsukuba_l.png", cv2.IMREAD_GRAYSCALE) # left image
imgR = cv2.imread("tsukuba_r.png", cv2.IMREAD_GRAYSCALE) # right image
def get_keypoints_and_descriptors(imgL, imgR):
"""Use ORB detector and FLANN matcher to get keypoints, descritpors,
and corresponding matches that will be good for computing
homography.
"""
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(imgL, None)
kp2, des2 = orb.detectAndCompute(imgR, None)
############## Using FLANN matcher ##############
# Each keypoint of the first image is matched with a number of
# keypoints from the second image. k=2 means keep the 2 best matches
# for each keypoint (best matches = the ones with the smallest
# distance measurement).
FLANN_INDEX_LSH = 6
index_params = dict(
algorithm=FLANN_INDEX_LSH,
table_number=6, # 12
key_size=12, # 20
multi_probe_level=1,
) # 2
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params, search_params)
flann_match_pairs = flann.knnMatch(des1, des2, k=2)
return kp1, des1, kp2, des2, flann_match_pairs
def lowes_ratio_test(matches, ratio_threshold=0.6):
"""Filter matches using the Lowe's ratio test.
The ratio test checks if matches are ambiguous and should be
removed by checking that the two distances are sufficiently
different. If they are not, then the match at that keypoint is
ignored.
https://stackoverflow.com/questions/51197091/how-does-the-lowes-ratio-test-work
"""
filtered_matches = []
for m, n in matches:
if m.distance < ratio_threshold * n.distance:
filtered_matches.append(m)
return filtered_matches
def draw_matches(imgL, imgR, kp1, des1, kp2, des2, flann_match_pairs):
"""Draw the first 8 mathces between the left and right images."""
# https://docs.opencv.org/4.2.0/d4/d5d/group__features2d__draw.html
# https://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html
img = cv2.drawMatches(
imgL,
kp1,
imgR,
kp2,
flann_match_pairs[:8],
None,
flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS,
)
cv2.imshow("Matches", img)
cv2.imwrite("ORB_FLANN_Matches.png", img)
cv2.waitKey(0)
def compute_fundamental_matrix(matches, kp1, kp2, method=cv2.FM_RANSAC):
"""Use the set of good mathces to estimate the Fundamental Matrix.
See https://en.wikipedia.org/wiki/Eight-point_algorithm#The_normalized_eight-point_algorithm
for more info.
"""
pts1, pts2 = [], []
fundamental_matrix, inliers = None, None
for m in matches[:8]:
pts1.append(kp1[m.queryIdx].pt)
pts2.append(kp2[m.trainIdx].pt)
if pts1 and pts2:
# You can play with the Threshold and confidence values here
# until you get something that gives you reasonable results. I
# used the defaults
fundamental_matrix, inliers = cv2.findFundamentalMat(
np.float32(pts1),
np.float32(pts2),
method=method,
# ransacReprojThreshold=3,
# confidence=0.99,
)
return fundamental_matrix, inliers, pts1, pts2
############## Find good keypoints to use ##############
kp1, des1, kp2, des2, flann_match_pairs = get_keypoints_and_descriptors(imgL, imgR)
good_matches = lowes_ratio_test(flann_match_pairs, 0.2)
draw_matches(imgL, imgR, kp1, des1, kp2, des2, good_matches)
############## Compute Fundamental Matrix ##############
F, I, points1, points2 = compute_fundamental_matrix(good_matches, kp1, kp2)
############## Stereo rectify uncalibrated ##############
h1, w1 = imgL.shape
h2, w2 = imgR.shape
thresh = 0
_, H1, H2 = cv2.stereoRectifyUncalibrated(
np.float32(points1), np.float32(points2), F, imgSize=(w1, h1), threshold=thresh,
)
############## Undistort (Rectify) ##############
imgL_undistorted = cv2.warpPerspective(imgL, H1, (w1, h1))
imgR_undistorted = cv2.warpPerspective(imgR, H2, (w2, h2))
cv2.imwrite("undistorted_L.png", imgL_undistorted)
cv2.imwrite("undistorted_R.png", imgR_undistorted)
############## Calculate Disparity (Depth Map) ##############
# Using StereoBM
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity_BM = stereo.compute(imgL_undistorted, imgR_undistorted)
plt.imshow(disparity_BM, "gray")
plt.colorbar()
plt.show()
# Using StereoSGBM
# Set disparity parameters. Note: disparity range is tuned according to
# specific parameters obtained through trial and error.
win_size = 2
min_disp = -4
max_disp = 9
num_disp = max_disp - min_disp # Needs to be divisible by 16
stereo = cv2.StereoSGBM_create(
minDisparity=min_disp,
numDisparities=num_disp,
blockSize=5,
uniquenessRatio=5,
speckleWindowSize=5,
speckleRange=5,
disp12MaxDiff=2,
P1=8 * 3 * win_size ** 2,
P2=32 * 3 * win_size ** 2,
)
disparity_SGBM = stereo.compute(imgL_undistorted, imgR_undistorted)
plt.imshow(disparity_SGBM, "gray")
plt.colorbar()
plt.show()
There might be several possible issues resulting in a low-quality Depth Channel and Disparity Channel, which lead to a low-quality stereo sequence. Here are 6 of those issues:
Possible issue I
Incomplete Formula
As the word uncalibrated implies, the stereoRectifyUncalibrated instance method calculates rectification transformations for you, in case you don't know or can't know the intrinsic parameters of your stereo pair and its relative position in the environment.
# pts1 –> an array of feature points in a first camera
# pts2 –> an array of feature points in a first camera
# fm –> input fundamental matrix
# imgSize -> size of an image
# rhm1 -> output rectification homography matrix for a first image
# rhm2 -> output rectification homography matrix for a second image
# thres –> optional threshold used to filter out outliers
So, you do not take into account three parameters: rhm1, rhm2 and thres. If the threshold > 0, all point pairs that don't comply with the epipolar geometry are rejected prior to computing the homographies. Otherwise, all points are considered inliers. The formula looks like this:
(pts2[i]^t * fm * pts1[i]) > thres
# ^t –> transpose of the point vector
Thus, I believe that visual inaccuracies might appear due to an incomplete calculation of that formula.
Possible issue II
Interaxial Distance
A robust interaxial distance between the left and right camera lenses must not be greater than 200 mm. When the interaxial distance is larger than the interocular distance, the effect is called hyperstereoscopy or hyperdivergence and results not only in depth exaggeration in the scene but also in physical inconvenience for the viewer. Read Autodesk's Stereoscopic Filmmaking Whitepaper to find out more on this topic.
Possible issue III
Parallel vs Toed-In camera mode
Visual inaccuracies in resulted Disparity Map may occur due to incorrect Camera Mode calculation. Many stereographers prefer Toe-In camera mode but Pixar, for example, prefers Parallel camera mode.
Possible issue IV
Vertical Alignment
In stereoscopy, if a vertical shift occurs (even if one of the views is shifted up by 1 mm), it ruins a robust stereo experience. So, before generating the Disparity Map, you must be sure that the left and right views of your stereo pair are properly aligned. Look at the Technicolor Stereoscopic Whitepaper about 15 common problems in stereo.
Stereo Rectification Matrix:
┌ ┐
| f 0 cx tx |
| 0 f cy ty | # use "ty" value to fix vertical shift in one image
| 0 0 1 0 |
└ ┘
Possible issue V
Lens Distortion
Lens distortion is a very important topic in stereo composition. Before generating a Disparity Map you need to undistort the left and right views, then generate a disparity channel, and then re-distort both views again.
Possible issue VI
Low-quality Depth channel without anti-aliasing
For creating a high-quality Disparity Map you need left and right Depth Channels that must be pre-generated. When you work in a 3D package you can render a high-quality Depth Channel (with crisp edges) with just one click. But generating a high-quality depth channel from a video sequence is not easy, because the stereo pair has to move in your environment to produce the initial data for a future depth-from-motion algorithm. If there's no motion in a frame, the depth channel will be extremely poor.
Also, Depth channel itself has one more drawback – its edges do not match the edges of the RGB because it has no anti-aliasing.
Disparity channel code snippet:
Here I’d like to represent a quick approach to generate a Disparity Map:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
imageLeft = cv.imread('paris_left.png', 0)
imageRight = cv.imread('paris_right.png', 0)
stereo = cv.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imageLeft, imageRight)
plt.imshow(disparity, 'gray')
plt.show()
You have two foreign keys to User. Django automatically creates a reverse relation from User back to GameClaim, which is usually gameclaim_set. However, because you have two FKs, you would have two gameclaim_set attributes, which is obviously impossible. So you need to tell Django what name to use for the reverse relation.
Use the related_name attribute in the FK definition. e.g.
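A minimal sketch of what that might look like (the field names follow the wording of the next answer; the related_name values themselves are just examples):

from django.db import models
from django.contrib.auth.models import User

class GameClaim(models.Model):
    target = models.ForeignKey(User, on_delete=models.CASCADE,
                               related_name='claims_as_target')
    claimer = models.ForeignKey(User, on_delete=models.CASCADE,
                                related_name='claims_as_claimer')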
The User model is trying to create two fields with the same name, one for the GameClaims that have that User as the target, and another for the GameClaims that have that User as the claimer. Here’s the docs on related_name, which is Django’s way of letting you set the names of the attributes so the autogenerated ones don’t conflict.
The OP isn't using an abstract base class… but if you are, you will find that hard-coding the related_name in the FK (e.g. …, related_name="myname") will result in a number of these conflict errors, one for each class inherited from the base class. The link provided below contains the workaround, which is simple, but definitely not obvious.
From the django docs…
If you are using the related_name
attribute on a ForeignKey or
ManyToManyField, you must always
specify a unique reverse name for the
field. This would normally cause a
problem in abstract base classes,
since the fields on this class are
included into each of the child
classes, with exactly the same values
for the attributes (including
related_name) each time.
Sometimes you have to use extra formatting in related_name; actually, any time inheritance is used.
class Value(models.Model):
value = models.DecimalField(decimal_places=2, max_digits=5)
animal = models.ForeignKey(
Animal, related_name="%(app_label)s_%(class)s_related")
class Meta:
abstract = True
class Height(Value):
pass
class Weigth(Value):
pass
class Length(Value):
pass
No clash here, but related_name is defined once and Django will take care for creating unique relation names.
Then, in the children of the Value class, you'll have access to reverse relations generated from that pattern: an Animal instance gets attributes like <app_label>_height_related, <app_label>_weigth_related, and <app_label>_length_related, where <app_label> is the label of the app the model lives in.
I seem to come across this occasionally when I add a submodule as an application to a django project, for example given the following structure:
myapp/
myapp/module/
myapp/module/models.py
If I add the following to INSTALLED_APPS:
'myapp',
'myapp.module',
Django seems to process the myapp.module models.py file twice and throws the above error. This can be resolved by not including the main module in the INSTALLED_APPS list:
'myapp.module',
Including myapp instead of myapp.module causes all the database tables to be created with incorrect names, so this seems to be the correct way to do it.
I came across this post while looking for a solution to this problem so figured I’d put this here :)
I want to write out a file name with the exact date and time. It is an XML file that the program has already created; I just need to name the file. My code is not working.
[EDITED] – The error returned
File "./fix.py", line 226, in <module>
filenames = datetime.now().strftime("%Y%m%d-%H%M%S")
AttributeError: 'module' object has no attribute 'now'
I'm surprised there is not some single formatter that returns a default (and safe) 'for appending to a filename' format of the time; we could then simply write FD.write('mybackup' + time.strftime('<formatter here>') + '.ext'), with something like "%x" instead of "%Y%m%d-%H%M%S".
Here's some code where I needed to include a date-time stamp in the folder name when dumping files from a web scraper.
# import time, os, and datetime modules to use to build the file folder name
import time
import os
from datetime import datetime  # needed for datetime.now() below
# Build string for directory to hold files
# Output Configuration
# drive_letter = Output device location (hard drive)
# folder_name = directory (folder) to receive and store PDF files
drive_letter = r'D:\\'
folder_name = r'downloaded-files'
folder_time = datetime.now().strftime("%Y-%m-%d_%I-%M-%S_%p")
folder_to_save_files = drive_letter + folder_name + folder_time
# IF no such folder exists, create one automatically
if not os.path.exists(folder_to_save_files):
os.mkdir(folder_to_save_files)
In [1]: import matplotlib.pyplot as p
In [2]: p.plot(range(20),range(20))
Out[2]: [<matplotlib.lines.Line2D object at 0xa64932c>]
In [3]: p.show()
If you edit ~/.matplotlib/matplotlibrc and change the backend to something like GtkAgg, you should see a plot. You can list all the backends available on your machine with
import matplotlib.rcsetup as rcsetup
print(rcsetup.all_backends)
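As a quick sketch, you can also force a backend from code before pyplot is imported (the backend name here is just an example; pick one from the list printed above):

import matplotlib
matplotlib.use('TkAgg')          # must happen before importing pyplot
import matplotlib.pyplot as plt

plt.plot(range(20), range(20))
plt.show()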
I ran into the exact same problem on Ubuntu 12.04, because I installed matplotlib (within a virtualenv) using
pip install matplotlib
To make a long story short, my advice is: don't try to install matplotlib using pip or by hand; let a real package manager (e.g. apt-get / synaptic) install it and all its dependencies for you.
Unfortunately, matplotlib’s backends (alternative methods for actually rendering your plots) have all sorts of dependencies that pip will not deal with. Even worse, it fails silently; that is, pip install matplotlib appears to install matplotlib successfully. But when you try to use it (e.g. pyplot.show()), no plot window will appear. I tried all the different backends that people on the web suggest (Qt4Agg, GTK, etc.), and they all failed (i.e. when I tried to import matplotlib.pyplot, I get ImportError because it’s trying to import some dependency that’s missing). I then researched how to install those dependencies, but it just made me want to give up using pip (within virtualenv) as a viable installation solution for any package that has non-Python package dependencies.
The whole experience sent me crawling back to apt-get / synaptic (i.e. the Ubuntu package manager) to install software like matplotlib. That worked perfectly. Of course, that means you can only install into your system directories, no virtualenv goodness, and you are stuck with the versions that Ubuntu distributes, which may be way behind the current version…
I have encountered the same problem (pylab was not showing under ipython). The problem was fixed by changing ipython's config file, ipython_config.py.
Similar to @Rikki, I solved this problem by upgrading matplotlib with pip install matplotlib --upgrade. If you can't upgrade, uninstalling and reinstalling may work.
For me, the problem happens if I simply create an empty matplotlibrc file under ~/.matplotlib on macOS. Adding "backend: macosx" in it fixes the problem.
I think it is a bug: if the backend is not specified in my matplotlibrc, it should take the default value.
I want to slice a NumPy nxn array. I want to extract an arbitrary selection of m rows and columns of that array (i.e. without any pattern in the numbers of rows/columns), making it a new, mxm array. For this example let us say the array is 4×4 and I want to extract a 2×2 array from it.
Here is our array:
from numpy import *
x = range(16)
x = reshape(x,(4,4))
print x
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
The rows and columns to remove are the same. The easiest case is when I want to extract a 2×2 submatrix that is at the beginning or at the end, i.e.:
In [33]: x[0:2,0:2]
Out[33]:
array([[0, 1],
[4, 5]])
In [34]: x[2:,2:]
Out[34]:
array([[10, 11],
[14, 15]])
But what if I need to remove another mixture of rows/columns? What if I need to remove the first and third rows/columns, thus extracting the submatrix [[5,7],[13,15]]? There can be any combination of rows/columns. I read somewhere that I just need to index my array using arrays/lists of indices for both rows and columns, but that doesn't seem to work:
In [35]: x[[1,3],[1,3]]
Out[35]: array([ 5, 15])
I found one way, which is:
In [61]: x[[1,3]][:,[1,3]]
Out[61]:
array([[ 5, 7],
[13, 15]])
First issue with this is that it is hardly readable, although I can live with that. If someone has a better solution, I’d certainly like to hear it.
Other thing is I read on a forum that indexing arrays with arrays forces NumPy to make a copy of the desired array, thus when treating with large arrays this could become a problem. Why is that so / how does this mechanism work?
As Sven mentioned, x[[[0],[2]],[1,3]] will give back rows 0 and 2 matched with columns 1 and 3, while x[[0,2],[1,3]] will return the values x[0,1] and x[2,3] in an array.
There is a helpful function for doing the first example I gave, numpy.ix_. You can do the same thing as my first example with x[numpy.ix_([0,2],[1,3])]. This can save you from having to enter in all of those extra brackets.
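For example, with the 4×4 array x from the question:

import numpy as np

x = np.arange(16).reshape(4, 4)
print(x[np.ix_([1, 3], [1, 3])])
# [[ 5  7]
#  [13 15]]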
To answer this question, we have to look at how indexing a multidimensional array works in Numpy. Let’s first say you have the array x from your question. The buffer assigned to x will contain 16 ascending integers from 0 to 15. If you access one element, say x[i,j], NumPy has to figure out the memory location of this element relative to the beginning of the buffer. This is done by calculating in effect i*x.shape[1]+j (and multiplying with the size of an int to get an actual memory offset).
If you extract a subarray by basic slicing like y = x[0:2,0:2], the resulting object will share the underlying buffer with x. But what happens if you access y[i,j]? NumPy can’t use i*y.shape[1]+j to calculate the offset into the array, because the data belonging to y is not consecutive in memory.
NumPy solves this problem by introducing strides. When calculating the memory offset for accessing x[i,j], what is actually calculated is i*x.strides[0]+j*x.strides[1] (and this already includes the factor for the size of an int):
x.strides
(16, 4)
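As a quick check of that formula (a small sketch; the array is created as int32 explicitly so the strides come out as (16, 4) bytes, matching the numbers above):

import numpy as np

x = np.arange(16, dtype=np.int32).reshape(4, 4)
i, j = 2, 3
offset = i * x.strides[0] + j * x.strides[1]       # byte offset from the start of the buffer
# x.tobytes() gives a copy of the raw buffer; reading it at the computed
# offset (converted from bytes to elements) recovers the same value.
element = np.frombuffer(x.tobytes(), dtype=x.dtype)[offset // x.itemsize]
assert element == x[i, j] == 11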
When y is extracted like above, NumPy does not create a new buffer, but it does create a new array object referencing the same buffer (otherwise y would just be equal to x). The new array object will have a different shape than x, and maybe a different starting offset into the buffer, but it will share the strides with x (in this case at least):
y.shape
(2,2)
y.strides
(16, 4)
This way, computing the memory offset for y[i,j] will yield the correct result.
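You can verify this view behaviour directly (a small sketch along the same lines):

import numpy as np

x = np.arange(16).reshape(4, 4)
y = x[0:2, 0:2]                  # basic slicing -> a view, no new buffer
print(y.base is x)               # True: y has no buffer of its own
print(x.strides == y.strides)    # True: the strides are shared
y[0, 0] = 99                     # writing through the view...
print(x[0, 0])                   # ...changes x as well: 99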
But what should NumPy do for something like z = x[[1,3]]? The strides mechanism won’t allow correct indexing if the original buffer is used for z. NumPy theoretically could add some mechanism more sophisticated than strides, but this would make element access relatively expensive, somewhat defeating the whole idea of an array. In addition, a view wouldn’t be a really lightweight object anymore.
I don’t find x[[1,3]][:,[1,3]] hard to read. If you want to be more explicit about your intent, you can write:
x[[1,3],:][:,[1,3]]
I am not an expert in slicing, but typically, if you slice into an array and the selected values are contiguous, you get back a view whose strides are adjusted accordingly.
e.g. In your inputs 33 and 34, although you get a 2×2 array, the row stride is still that of the original 4-wide array, so when you index the next row, the pointer moves to the correct position in memory.
Clearly, this mechanism doesn’t carry over to the case of an array of indices, hence NumPy has to make the copy. After all, many other matrix math functions rely on shape, strides and contiguous memory allocation.
If you wish to select arbitrary rows and columns, then you can’t use basic slicing. You’ll have to use advanced indexing, using something like x[rows,:][:,columns], where rows and columns are sequences. This of course is going to give you a copy, not a view, of your original array. This is as one should expect, since a numpy array uses contiguous memory (with constant strides), and there would be no way to generate a view with arbitrary rows and columns (since that would require non-constant strides).
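You can also check directly that the advanced-indexing result is an independent copy, unlike the basic-slicing view discussed above:

import numpy as np

x = np.arange(16).reshape(4, 4)
z = x[[1, 3]][:, [1, 3]]         # advanced (fancy) indexing -> a copy

print(np.shares_memory(x, z))    # False: z has its own buffer
z[0, 0] = -1                     # modifying the copy...
print(x[1, 1])                   # ...leaves x untouched: still 5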
With numpy, you can pass a slice for each component of the index – so, your x[0:2,0:2] example above works.
If you just want to evenly skip columns or rows, you can pass slices with three components
(i.e. start, stop, step).
Again, for your example above:
>>> x[1:4:2, 1:4:2]
array([[ 5, 7],
[13, 15]])
Which is basically: slice in the first dimension, starting at index 1, stopping when the index is equal to or greater than 4, and adding 2 to the index on each pass; the same for the second dimension. Again: this only works for constant steps.
The syntax you found does something quite different internally – what x[[1,3]][:,[1,3]] actually does is create a new array including only rows 1 and 3 from the original array (done with the x[[1,3]] part), and then re-slice that – creating a third array – including only columns 1 and 3 of the previous array.
When it comes to constructors, and assignments, and method calls, the PyCharm IDE is pretty good at analyzing my source code and figuring out what type each variable should be. I like it when it’s right, because it gives me good code-completion and parameter info, and it gives me warnings if I try to access an attribute that doesn’t exist.
But when it comes to parameters, it knows nothing. The code-completion dropdowns can’t show anything, because they don’t know what type the parameter will be. The code analysis can’t look for warnings.
class Person:
def __init__(self, name, age):
self.name = name
self.age = age
peasant = Person("Dennis", 37)
# PyCharm knows that the "peasant" variable is of type Person
peasant.dig_filth() # shows warning -- Person doesn't have a dig_filth method
class King:
def repress(self, peasant):
# PyCharm has no idea what type the "peasant" parameter should be
peasant.knock_over() # no warning even though knock_over doesn't exist
King().repress(peasant)
# Even if I call the method once with a Person instance, PyCharm doesn't
# consider that to mean that the "peasant" parameter should always be a Person
This makes a certain amount of sense. Other call sites could pass anything for that parameter. But if my method expects a parameter to be of type, say, pygame.Surface, I’d like to be able to indicate that to PyCharm somehow, so it can show me all of Surface‘s attributes in its code-completion dropdown, and highlight warnings if I call the wrong method, and so on.
Is there a way I can give PyCharm a hint, and say “psst, this parameter is supposed to be of type X”? (Or perhaps, in the spirit of dynamic languages, “this parameter is supposed to quack like an X”? I’d be fine with that.)
EDIT: CrazyCoder’s answer, below, does the trick. For any newcomers like me who want the quick summary, here it is:
class King:
def repress(self, peasant):
"""
Exploit the workers by hanging on to outdated imperialist dogma which
perpetuates the economic and social differences in our society.
@type peasant: Person
@param peasant: Person to repress.
"""
peasant.knock_over() # Shows a warning. And there was much rejoicing.
The relevant part is the @type peasant: Person line of the docstring.
If you also go to File > Settings > Python Integrated Tools and set “Docstring format” to “Epytext”, then PyCharm’s View > Quick Documentation Lookup will pretty-print the parameter information instead of just printing all the @-lines as-is.
Yes, you can use a special documentation format for methods and their parameters so that PyCharm can know the type. Recent PyCharm versions support most common doc formats.
If you are using Python 3.0 or later, you can also use annotations on functions and parameters. PyCharm will interpret these as the type the arguments or return values are expected to have:
class King:
def repress(self, peasant: Person) -> bool:
peasant.knock_over() # Shows a warning. And there was much rejoicing.
return peasant.badly_hurt() # Lets say, its not known from here that this method will always return a bool
Sometimes this is useful for non-public methods that do not need a docstring. As an added benefit, those annotations can be accessed by code:
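For example, the annotated types end up in the function’s __annotations__ mapping (assuming the King and Person classes defined above):

print(King.repress.__annotations__)
# {'peasant': <class 'Person'>, 'return': <class 'bool'>}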
Update: As of PEP 484, which has been accepted for Python 3.5, it is also the official convention to specify argument and return types using annotations.
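Under PEP 484 the same idea extends to container and optional types via the typing module; a small illustrative sketch (not from the original answer):

from typing import List, Optional

class Person:
    def __init__(self, name: str, age: int) -> None:
        self.name = name
        self.age = age

def oldest(people: List[Person]) -> Optional[Person]:
    """Return the oldest person, or None if the list is empty."""
    return max(people, key=lambda p: p.age) if people else None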
PyCharm extracts types from an Epydoc-style @type docstring tag; see the PyCharm and Epydoc documentation for details. It’s in the ‘legacy’ section of the PyCharm docs, so it may lack some functionality.
class King:
def repress(self, peasant):
"""
Exploit the workers by hanging on to outdated imperialist dogma which
perpetuates the economic and social differences in our society.
@type peasant: Person
@param peasant: Person to repress.
"""
peasant.knock_over() # Shows a warning. And there was much rejoicing.
The relevant part is the @type peasant: Person line of the docstring.
My intention is not to steal points from CrazyCoder or the original questioner, by all means give them their points. I just thought the simple answer should be in an ‘answer’ slot.
Answer 3
I’m using PyCharm Professional 2016.1 writing py2.6-2.7 code, and I found that using reStructuredText I can express types in a more succinct way:
class Replicant(object):
pass
class Hunter(object):
def retire(self, replicant):
""" Retire the rogue or non-functional replicant.
:param Replicant replicant: the replicant to retire.
"""
replicant.knock_over() # Shows a warning.
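If you also need a return type, the same reStructuredText style supports separate :type: and :rtype: fields, which PyCharm should recognize as well; a rough sketch (the retire_all method and the list[Replicant] spelling are my own illustration, not from the original answer):

class Hunter(object):
    def retire_all(self, replicants):
        """Retire every replicant in the list.

        :param replicants: the replicants to retire
        :type replicants: list[Replicant]
        :rtype: int
        """
        for r in replicants:
            r.knock_over()   # shows a warning, just like above, since Replicant has no such method
        return len(replicants)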