Is there a built-in that removes duplicates from a list in Python, whilst preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
def uniq(input):
    output = []
    for x in input:
        if x not in output:
            output.append(x)
    return output
def f7(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
Why assign seen.add to seen_add instead of just calling seen.add? Python is a dynamic language, and resolving seen.add each iteration is more costly than resolving a local variable. seen.add could have changed between iterations, and the runtime isn’t smart enough to rule that out. To play it safe, it has to check the object each time.
A set gives O(1) average-case insertion, deletion and membership checking per operation.
(Small additional note: seen.add() always returns None, so the or above is there only as a way to attempt a set update, and not as an integral part of the logical test.)
As Raymond pointed out, in Python 3.5+, where OrderedDict is implemented in C, the list comprehension approach will be slower than OrderedDict (unless you actually need the list at the end, and even then only if the input is very short). So the best solution for 3.5+ is OrderedDict.
Important Edit 2015
As @abarnert notes, the more_itertools library (pip install more_itertools) contains a unique_everseen function that is built to solve this problem without any unreadable (not seen.add) mutations in list comprehensions. It is also the fastest solution:
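A minimal usage sketch (the sample list here is illustrative, not taken from the original answer):

from more_itertools import unique_everseen

items = [1, 2, 0, 1, 3, 2]      # illustrative input
list(unique_everseen(items))    # -> [1, 2, 0, 3]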
Just one simple library import and no hacks.
This comes from an implementation of the itertools recipe unique_everseen which looks like:
from itertools import filterfalse    # needed by the recipe

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element
In Python 2.7+, the accepted common idiom for this (it works but isn't optimized for speed; I would now use unique_everseen) uses collections.OrderedDict:
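For example (the sample string is illustrative, not from the original answer):

from collections import OrderedDict

list(OrderedDict.fromkeys('abracadabra'))   # -> ['a', 'b', 'r', 'c', 'd']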
In Python 3.5, the OrderedDict has a C implementation. My timings show that this is now both the fastest and shortest of the various approaches for Python 3.5.
In Python 3.6, the regular dict became both ordered and compact. (This holds for CPython and PyPy but may not be present in other implementations.) That gives us a new fastest way of deduping while retaining order:
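For example (illustrative input, not taken from the original answer):

list(dict.fromkeys('abracadabra'))   # -> ['a', 'b', 'r', 'c', 'd']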
Response to @max: Once you move to 3.6 or 3.7 and use the regular dict instead of OrderedDict, you can't really beat the performance in any other way. The dictionary is dense and readily converts to a list with almost no overhead. The target list is pre-sized to len(d), which saves all the resizes that occur in a list comprehension. Also, since the internal key list is dense, copying the pointers is almost as fast as a list copy.
sequence = ['1', '2', '3', '3', '6', '4', '5', '6']
unique = []
[unique.append(item) for item in sequence if item not in unique]
Not to kick a dead horse (this question is very old and already has lots of good answers), but here is a solution using pandas that is quite fast in many circumstances and is dead simple to use.
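A sketch of the idea (the sample list is mine, not from the original answer):

import pandas as pd

my_list = [0, 1, 2, 3, 4, 1, 2, 3, 5]          # illustrative input
pd.Series(my_list).drop_duplicates().tolist()  # -> [0, 1, 2, 3, 4, 5]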
from itertools import groupby
[key for key, _ in groupby(sortedList)]
The list doesn't even have to be sorted; it is sufficient that equal values are grouped together.
Edit: I assumed that “preserving order” implies that the list is actually ordered. If this is not the case, then the solution from MizardX is the right one.
Community edit: This is however the most elegant way to “compress duplicate consecutive elements into a single element”.
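For instance (an illustrative input, not from the original answer):

from itertools import groupby

[key for key, _ in groupby([1, 1, 2, 2, 3, 1, 1])]   # -> [1, 2, 3, 1]

Only consecutive duplicates are collapsed, which is why the trailing 1s survive here.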
In Python 3.7 and above, dictionaries are guaranteed to remember their key insertion order. The answer to this question summarizes the current state of affairs.
The OrderedDict solution thus becomes obsolete, and without any import statements we can simply issue:
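For example (illustrative input):

>>> list(dict.fromkeys([1, 2, 1, 3, 2, 4]))
[1, 2, 3, 4]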
For another very late answer to another very old question:
The itertools recipes have a function that does this, using the seen set technique, but:
Handles a standard key function.
Uses no unseemly hacks.
Optimizes the loop by pre-binding seen.add instead of looking it up N times. (f7 also does this, but some versions don’t.)
Optimizes the loop by using ifilterfalse, so you only have to loop over the unique elements in Python, instead of all of them. (You still iterate over all of them inside ifilterfalse, of course, but that’s in C, and much faster.)
Is it actually faster than f7? It depends on your data, so you’ll have to test it and see. If you want a list in the end, f7 uses a listcomp, and there’s no way to do that here. (You can directly append instead of yielding, or you can feed the generator into the list function, but neither one can be as fast as the LIST_APPEND inside a listcomp.) At any rate, usually, squeezing out a few microseconds is not going to be as important as having an easily-understandable, reusable, already-written function that doesn’t require DSU when you want to decorate.
As with all of the recipes, it's also available in more-itertools.
If you just want the no-key case, you can simplify it as:
import itertools   # Python 2; ifilterfalse was renamed filterfalse in Python 3

def unique(iterable):
    seen = set()
    seen_add = seen.add
    for element in itertools.ifilterfalse(seen.__contains__, iterable):
        seen_add(element)
        yield element
I did some timings (Python 3.6) and these show that iteration_utilities.unique_everseen is faster than all other alternatives I tested, including OrderedDict.fromkeys, f7 and more_itertools.unique_everseen:
%matplotlib notebook
from iteration_utilities import unique_everseen
from collections import OrderedDict
from more_itertools import unique_everseen as mi_unique_everseen
def f7(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]

def iteration_utilities_unique_everseen(seq):
    return list(unique_everseen(seq))

def more_itertools_unique_everseen(seq):
    return list(mi_unique_everseen(seq))

def odict(seq):
    return list(OrderedDict.fromkeys(seq))
from simple_benchmark import benchmark
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
{2**i: list(range(2**i)) for i in range(1, 20)},
'list size (no duplicates)')
b.plot()
And just to make sure, I also did a test with more duplicates to check whether it makes a difference:
import random
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
{2**i: [random.randint(0, 2**(i-1)) for _ in range(2**i)] for i in range(1, 20)},
'list size (lots of duplicates)')
b.plot()
And one containing only one value:
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
{2**i: [1]*(2**i) for i in range(1, 20)},
'list size (only duplicates)')
b.plot()
In all of these cases the iteration_utilities.unique_everseen function is the fastest (on my computer).
This iteration_utilities.unique_everseen function can also handle unhashable values in the input (albeit with O(n²) performance instead of the O(n) performance when the values are hashable).
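For instance (an illustrative input of my own, relying on the unhashable fallback described above):

from iteration_utilities import unique_everseen

list(unique_everseen([[1, 2], [1], [1, 2]]))   # lists are unhashable -> [[1, 2], [1]]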
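The definition of unique is not included in this excerpt; a minimal reconstruction that reproduces the demo below (an assumption on my part, not necessarily the answer's original code) is:

def unique(seq):
    # hypothetical reconstruction: keep the first occurrence of each element
    seen = set()
    result = []
    for item in seq:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result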
In [118]: unique([1,5,1,1,4,3,4])
Out[118]: [1, 5, 4, 3]
I tried it for growing data sizes and saw sub-linear time-complexity (not definitive, but suggests this should be fine for normal data).
In [122]: %timeit unique(np.random.randint(5, size=(1)))
10000 loops, best of 3: 25.3 us per loop
In [123]: %timeit unique(np.random.randint(5, size=(10)))
10000 loops, best of 3: 42.9 us per loop
In [124]: %timeit unique(np.random.randint(5, size=(100)))
10000 loops, best of 3: 132 us per loop
In [125]: %timeit unique(np.random.randint(5, size=(1000)))
1000 loops, best of 3: 1.05 ms per loop
In [126]: %timeit unique(np.random.randint(5, size=(10000)))
100 loops, best of 3: 11 ms per loop
I also think it’s interesting that this could be readily generalized to uniqueness by other operations. Like this:
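One way to write that generalization (my own sketch, since the answer's original code is not in this excerpt) is to replace the set membership test with an arbitrary pairwise comparison. A pairwise test rules out hashing, so this version scans the kept list and is O(n^2):

import operator

def unique(seq, test=operator.ne):
    # test(x, y) should return True when x and y count as *different*;
    # the default, operator.ne, gives ordinary order-preserving dedup
    kept = []
    for item in seq:
        if all(test(item, other) for other in kept):
            kept.append(item)
    return kept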
For example, you could pass in a function that uses the notion of rounding to the same integer as if it was “equality” for uniqueness purposes, like this:
def test_round(x, y):
    return round(x) != round(y)
then unique(some_list, test_round) would provide the unique elements of the list, where uniqueness no longer means traditional equality (which is what any set-based or dict-key-based approach to this problem implies) but instead means keeping only the first element that rounds to K, for each possible integer K that the elements might round to, e.g.:
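Using the sketch above, a run might look like this (illustrative values):

unique([1.1, 1.4, 2.6, 3.1, 2.9], test_round)   # -> [1.1, 2.6]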
>>> l = [5, 6, 6, 1, 1, 2, 2, 3, 4]
>>> reduce(lambda r, v: v in r[1] and r or (r[0].append(v) or r[1].add(v)) or r, l, ([], set()))[0]
[5, 6, 1, 2, 3, 4]
Explanation:
# In Python 3, reduce must be imported first: from functools import reduce
default = (list(), set())
# use list to keep order
# use set to make lookup faster

def reducer(result, item):
    if item not in result[1]:
        result[0].append(item)
        result[1].add(item)
    return result
>>> reduce(reducer, l, default)[0]
[5, 6, 1, 2, 3, 4]
You can reference a list comprehension as it is being built via the symbol '_[1]' (this relies on an undocumented CPython implementation detail and does not work in Python 3). For example, the following function unique-ifies a list of elements without changing their order by referencing its own list comprehension.
def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]']]
Demo:
l1 = [1, 2, 3, 4, 1, 2, 3, 4, 5]
l2 = [x for x in l1 if x not in locals()['_[1]']]
print l2
Output:
[1, 2, 3, 4, 5]
MizardX's answer gives a good summary of several approaches.
This is what I came up with while thinking out loud:

mylist = [x for i, x in enumerate(mylist) if x not in mylist[i+1:]]

(Note that this keeps the last occurrence of each duplicate rather than the first.)
If you routinely use pandas, and aesthetics are preferred over performance, then consider pandas.Series.drop_duplicates:
import pandas as pd
import numpy as np
uniquifier = lambda alist: pd.Series(alist).drop_duplicates().tolist()
# from the chosen answer
def f7(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
alist = np.random.randint(low=0, high=1000, size=10000).tolist()
print uniquifier(alist) == f7(alist) # True
Timing:
In [104]: %timeit f7(alist)
1000 loops, best of 3: 1.3 ms per loop
In [110]: %timeit uniquifier(alist)
100 loops, best of 3: 4.39 ms per loop
This will preserve order and run in O(n) time. Basically, the idea is to create a hole wherever a duplicate is found and sink it down to the bottom. It makes use of a read pointer and a write pointer: whenever a duplicate is found, only the read pointer advances, while the write pointer stays on the duplicate entry so it can be overwritten.
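A sketch of that read/write-pointer idea (my own reconstruction from the description above, not the answer's original code; a set keeps the membership check O(1)):

def dedupe_in_place(lst):
    seen = set()
    write = 0
    for read in range(len(lst)):      # the read pointer scans every element
        if lst[read] not in seen:
            seen.add(lst[read])
            lst[write] = lst[read]    # the write pointer keeps unique items
            write += 1
    del lst[write:]                   # drop the leftover tail
    return lst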
A solution without using imported modules or sets:
text = "ask not what your country can do for you ask what you can do for your country"
sentence = text.split(" ")
noduplicates = [sentence[i] for i in range(0, len(sentence)) if sentence[i] not in sentence[:i]]
print(noduplicates)
This method is quadratic, because we have a linear lookup into the list for every element of the list (and on top of that we have to add the cost of rearranging the list because of the del statements).
That said, it is possible to operate in place if we start from the end of the list and proceed toward the beginning, removing each term that is already present in the sub-list to its left.
This idea in code is simply:
for i in range(len(l) - 1, 0, -1):
    if l[i] in l[:i]:
        del l[i]
zmk’s approach uses list comprehension which is very fast, yet keeps the order naturally. For applying to case sensitive strings it can be easily modified. This also preserves the original case.
def DelDupes(aseq):
    seen = set()
    return [x for x in aseq if (x.lower() not in seen) and (not seen.add(x.lower()))]
Closely associated functions are:
def HasDupes(aseq):
    s = set()
    return any(((x.lower() in s) or s.add(x.lower())) for x in aseq)

def GetDupes(aseq):
    s = set()
    return set(x for x in aseq if ((x.lower() in s) or s.add(x.lower())))