Question: How do you remove duplicates from a list while preserving order?
Is there a built-in that removes duplicates from a list in Python while preserving order? I know that I can use a set to remove duplicates, but that destroys the original order. I also know that I can roll my own like this:
def uniq(input):
output = []
for x in input:
if x not in output:
output.append(x)
return output
(Thanks to unwind for that code sample.)
But I'd like to take advantage of a built-in or a more Pythonic idiom if possible.
Related question: In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique while preserving order?
Answer 0
Here you have some alternatives: http://www.peterbe.com/plog/uniqifiers-benchmark
Fastest one:
def f7(seq):
seen = set()
seen_add = seen.add
return [x for x in seq if not (x in seen or seen_add(x))]
Why assign seen.add to seen_add instead of just calling seen.add? Python is a dynamic language, and resolving seen.add each iteration is more costly than resolving a local variable. seen.add could have changed between iterations, and the runtime isn’t smart enough to rule that out. To play it safe, it has to check the object each time.
If you plan on using this function a lot on the same dataset, perhaps you would be better off with an ordered set: http://code.activestate.com/recipes/528878/
O(1) insertion, deletion and member-check per operation.
(Small additional note: seen.add() always returns None, so the or above is there only as a way to attempt a set update, and not as an integral part of the logical test.)
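As a quick illustration of the pre-binding point, here is a minimal timeit sketch (the exact numbers are machine- and version-dependent; it is only meant to make the comparison concrete):
import timeit
setup = "seq = list(range(1000)) * 2"
attr_lookup = '''
seen = set()
[x for x in seq if not (x in seen or seen.add(x))]
'''
pre_bound = '''
seen = set()
seen_add = seen.add
[x for x in seq if not (x in seen or seen_add(x))]
'''
# On CPython the pre-bound version is usually slightly faster, because
# seen_add is a local-variable lookup instead of an attribute lookup.
print(timeit.timeit(attr_lookup, setup, number=1000))
print(timeit.timeit(pre_bound, setup, number=1000))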
Answer 1
Edit 2016
As Raymond pointed out, in Python 3.5+ where OrderedDict is implemented in C, the list comprehension approach will be slower than OrderedDict (unless you actually need the list at the end – and even then, only if the input is very short). So the best solution for 3.5+ is OrderedDict.
Important Edit 2015
As @abarnert notes, the more_itertools library (pip install more_itertools) contains a unique_everseen function that is built to solve this problem without any unreadable (not seen.add) mutations in list comprehensions. This is also the fastest solution:
>>> from more_itertools import unique_everseen
>>> items = [1, 2, 0, 1, 3, 2]
>>> list(unique_everseen(items))
[1, 2, 0, 3]
Just one simple library import and no hacks. This comes from an implementation of the itertools recipe unique_everseen, which looks like:
from itertools import filterfalse  # on Python 2: from itertools import ifilterfalse

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element
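For example, as the docstring comments above show, the key parameter deduplicates on a derived value while keeping the first original element:
>>> list(unique_everseen('AAAABBBCCDAABBB'))
['A', 'B', 'C', 'D']
>>> list(unique_everseen('ABBCcAD', str.lower))
['A', 'B', 'C', 'D']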
In Python 2.7+, the accepted common idiom for this (it works but isn’t optimized for speed; I would now use unique_everseen) uses collections.OrderedDict:
Runtime: O(N)
>>> from collections import OrderedDict
>>> items = [1, 2, 0, 1, 3, 2]
>>> list(OrderedDict.fromkeys(items))
[1, 2, 0, 3]
This looks much nicer than:
seen = set()
[x for x in seq if x not in seen and not seen.add(x)]
and doesn’t utilize the ugly hack:
not seen.add(x)
which relies on the fact that set.add is an in-place method that always returns None, so not None evaluates to True.
Note, however, that the hack solution is faster in raw speed, even though both have the same O(N) runtime complexity.
Answer 2
In Python 2.7, the new way of removing duplicates from an iterable while keeping it in the original order is:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.5, the OrderedDict has a C implementation. My timings show that this is now both the fastest and shortest of the various approaches for Python 3.5.
In Python 3.6, the regular dict became both ordered and compact. (This holds for CPython and PyPy but may not be present in other implementations.) That gives us a new fastest way of deduping while retaining order:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.7, the regular dict is guaranteed to be ordered across all implementations. So, the shortest and fastest solution is:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
Response to @max: Once you move to 3.6 or 3.7 and use the regular dict instead of OrderedDict, you can’t really beat the performance any other way. The dictionary is dense and readily converts to a list with almost no overhead. The target list is pre-sized to len(d), which saves all the resizes that occur in a list comprehension. Also, since the internal key list is dense, copying the pointers is almost as fast as a list copy.
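If you want to verify this on your own machine, a minimal timeit sketch along these lines will do (illustrative only; results vary by interpreter version and input):
import timeit
setup = '''
seq = list(range(1000)) * 2
def f7(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]
'''
# On CPython 3.6+, dict.fromkeys is typically at least as fast as the set hack.
print(timeit.timeit('list(dict.fromkeys(seq))', setup, number=1000))
print(timeit.timeit('f7(seq)', setup, number=1000))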
Answer 3
sequence = ['1', '2', '3', '3', '6', '4', '5', '6']
unique = []
[unique.append(item) for item in sequence if item not in unique]
unique → ['1', '2', '3', '6', '4', '5']
Answer 4
Not to kick a dead horse (this question is very old and already has lots of good answers), but here is a solution using pandas that is quite fast in many circumstances and is dead simple to use.
>>> import pandas as pd
>>> my_list = [0, 1, 2, 3, 4, 1, 2, 3, 5]
>>> pd.Series(my_list).drop_duplicates().tolist()
[0, 1, 2, 3, 4, 5]
Answer 5
from itertools import groupby
[ key for key,_ in groupby(sortedList)]
The list doesn’t even have to be sorted; the sufficient condition is that equal values are grouped together.
Edit: I assumed that “preserving order” implies that the list is actually ordered. If this is not the case, then the solution from MizardX is the right one.
Community edit: This is however the most elegant way to “compress duplicate consecutive elements into a single element”.
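To make the caveat concrete: groupby only collapses runs of equal values, so non-adjacent duplicates survive unless the input is grouped (e.g. sorted) first:
>>> from itertools import groupby
>>> [key for key, _ in groupby([1, 1, 2, 2, 1])]
[1, 2, 1]
>>> [key for key, _ in groupby(sorted([1, 1, 2, 2, 1]))]
[1, 2]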
Answer 6
I think if you want to maintain the order, you can try this:
list1 = ['b','c','d','b','c','a','a']
list2 = list(set(list1))
list2.sort(key=list1.index)
print list2
Or, similarly, you can do this:
list1 = ['b','c','d','b','c','a','a']
list2 = sorted(set(list1),key=list1.index)
print list2
You can also do this:
list1 = ['b','c','d','b','c','a','a']
list2 = []
for i in list1:
    if not i in list2:
        list2.append(i)
print list2
It can also be written like this:
list1 = ['b','c','d','b','c','a','a']
list2 = []
[list2.append(i) for i in list1 if not i in list2]
print list2
Answer 7
In Python 3.7 and above, dictionaries are guaranteed to remember their key insertion order. The answer to this question summarizes the current state of affairs.
The OrderedDict solution thus becomes obsolete, and without any import statements we can simply issue:
>>> lst = [1, 2, 1, 3, 3, 2, 4]
>>> list(dict.fromkeys(lst))
[1, 2, 3, 4]
Answer 8
For another very late answer to another very old question:
The itertools recipes have a function that does this, using the seen set technique, but:
- It handles a standard key function.
- It uses no unseemly hacks.
- It optimizes the loop by pre-binding seen.add instead of looking it up N times. (f7 also does this, but some versions don’t.)
- It optimizes the loop by using ifilterfalse, so you only have to loop over the unique elements in Python, instead of all of them. (You still iterate over all of them inside ifilterfalse, of course, but that’s in C, and much faster.)
Is it actually faster than f7? It depends on your data, so you’ll have to test it and see. If you want a list in the end, f7 uses a listcomp, and there’s no way to do that here. (You can directly append instead of yielding, or you can feed the generator into the list function, but neither one can be as fast as the LIST_APPEND inside a listcomp.) At any rate, usually, squeezing out a few microseconds is not going to be as important as having an easily-understandable, reusable, already-written function that doesn’t require DSU when you want to decorate.
As with all of the recipes, it’s also available in more-itertools.
If you just want the no-key case, you can simplify it as:
import itertools  # on Python 3, use itertools.filterfalse instead of ifilterfalse

def unique(iterable):
    seen = set()
    seen_add = seen.add
    for element in itertools.ifilterfalse(seen.__contains__, iterable):
        seen_add(element)
        yield element
Answer 9
Just to add another (very performant) implementation of such functionality from an external module1: iteration_utilities.unique_everseen:
>>> from iteration_utilities import unique_everseen
>>> lst = [1,1,1,2,3,2,2,2,1,3,4]
>>> list(unique_everseen(lst))
[1, 2, 3, 4]
Timings
I did some timings (Python 3.6) and these show that it’s faster than all the other alternatives I tested, including OrderedDict.fromkeys, f7 and more_itertools.unique_everseen:
%matplotlib notebook
from iteration_utilities import unique_everseen
from collections import OrderedDict
from more_itertools import unique_everseen as mi_unique_everseen
def f7(seq):
seen = set()
seen_add = seen.add
return [x for x in seq if not (x in seen or seen_add(x))]
def iteration_utilities_unique_everseen(seq):
return list(unique_everseen(seq))
def more_itertools_unique_everseen(seq):
return list(mi_unique_everseen(seq))
def odict(seq):
return list(OrderedDict.fromkeys(seq))
from simple_benchmark import benchmark
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
{2**i: list(range(2**i)) for i in range(1, 20)},
'list size (no duplicates)')
b.plot()
And just to make sure, I also did a test with more duplicates to check if it makes a difference:
import random
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
{2**i: [random.randint(0, 2**(i-1)) for _ in range(2**i)] for i in range(1, 20)},
'list size (lots of duplicates)')
b.plot()
And one containing only one value:
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
{2**i: [1]*(2**i) for i in range(1, 20)},
'list size (only duplicates)')
b.plot()
In all of these cases the iteration_utilities.unique_everseen function is the fastest (on my computer).
This iteration_utilities.unique_everseen function can also handle unhashable values in the input (however with O(n*n) performance instead of the O(n) performance when the values are hashable).
>>> lst = [{1}, {1}, {2}, {1}, {3}]
>>> list(unique_everseen(lst))
[{1}, {2}, {3}]
1 Disclaimer: I’m the author of that package.
Answer 10
For unhashable types (e.g. list of lists), based on MizardX’s:
def f7_noHash(seq):
    seen = set()
    return [x for x in seq if str(x) not in seen and not seen.add(str(x))]
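A quick usage sketch; note that keying on str(x) treats any two values with the same string representation as duplicates:
>>> f7_noHash([[1, 2], [3], [1, 2], [3, 4]])
[[1, 2], [3], [3, 4]]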
Answer 11
Borrowing the recursive idea used in defining Haskell’s nub function for lists, this would be a recursive approach:
def unique(lst):
    # list() is needed on Python 3, where filter returns an iterator
    return [] if lst == [] else [lst[0]] + unique(list(filter(lambda x: x != lst[0], lst[1:])))
e.g.:
In [118]: unique([1,5,1,1,4,3,4])
Out[118]: [1, 5, 4, 3]
I tried it for growing data sizes and saw sub-linear time-complexity (not definitive, but suggests this should be fine for normal data).
In [122]: %timeit unique(np.random.randint(5, size=(1)))
10000 loops, best of 3: 25.3 us per loop
In [123]: %timeit unique(np.random.randint(5, size=(10)))
10000 loops, best of 3: 42.9 us per loop
In [124]: %timeit unique(np.random.randint(5, size=(100)))
10000 loops, best of 3: 132 us per loop
In [125]: %timeit unique(np.random.randint(5, size=(1000)))
1000 loops, best of 3: 1.05 ms per loop
In [126]: %timeit unique(np.random.randint(5, size=(10000)))
100 loops, best of 3: 11 ms per loop
I also think it’s interesting that this could be readily generalized to uniqueness by other operations. Like this:
import operator
def unique(lst, cmp_op=operator.ne):
return [] if lst==[] else [lst[0]] + unique(filter(lambda x: cmp_op(x, lst[0]), lst[1:]), cmp_op)
For example, you could pass in a function that uses the notion of rounding to the same integer as if it was “equality” for uniqueness purposes, like this:
def test_round(x,y):
return round(x) != round(y)
then unique(some_list, test_round) would provide the unique elements of the list, where uniqueness no longer means traditional equality (which is implied by using any sort of set-based or dict-key-based approach to this problem) but instead means taking only the first element that rounds to K, for each possible integer K that the elements might round to, e.g.:
In [6]: unique([1.2, 5, 1.9, 1.1, 4.2, 3, 4.8], test_round)
Out[6]: [1.2, 5, 1.9, 4.2, 3]
Answer 12
A 5x faster reduce variant, but more sophisticated:
>>> l = [5, 6, 6, 1, 1, 2, 2, 3, 4]
>>> reduce(lambda r, v: v in r[1] and r or (r[0].append(v) or r[1].add(v)) or r, l, ([], set()))[0]
[5, 6, 1, 2, 3, 4]
Explanation:
from functools import reduce  # needed on Python 3

default = (list(), set())
# use list to keep order
# use set to make lookup faster

def reducer(result, item):
    if item not in result[1]:
        result[0].append(item)
        result[1].add(item)
    return result

>>> reduce(reducer, l, default)[0]
[5, 6, 1, 2, 3, 4]
Answer 13
You can reference a list comprehension as it is being built by the symbol ‘_[1]’ (this relies on a CPython 2 implementation detail and does not work in Python 3).
For example, the following function unique-ifies a list of elements without changing their order by referencing its list comprehension.
def unique(my_list):
return [x for x in my_list if x not in locals()['_[1]']]
Demo:
l1 = [1, 2, 3, 4, 1, 2, 3, 4, 5]
l2 = [x for x in l1 if x not in locals()['_[1]']]
print l2
Output:
[1, 2, 3, 4, 5]
Answer 14
MizardX’s answer gives a good collection of multiple approaches.
This is what I came up with while thinking aloud:
mylist = [x for i,x in enumerate(mylist) if x not in mylist[i+1:]]
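Note that, because each element is compared against the remainder of the list, this keeps the last occurrence of each duplicate rather than the first:
>>> mylist = [2, 1, 2, 3]
>>> [x for i, x in enumerate(mylist) if x not in mylist[i+1:]]
[1, 2, 3]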
Answer 15
Here is a simple way to do it:
list1 = ["hello", " ", "w", "o", "r", "l", "d"]
sorted(set(list1), key=lambda x: list1.index(x))
That gives the output:
["hello", " ", "w", "o", "r", "l", "d"]
Answer 16
You could do a sort of ugly list comprehension hack.
[l[i] for i in range(len(l)) if l.index(l[i]) == i]
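For example, since l.index returns the position of the first occurrence, only first occurrences survive:
>>> l = [5, 6, 6, 1, 1, 2]
>>> [l[i] for i in range(len(l)) if l.index(l[i]) == i]
[5, 6, 1, 2]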
Answer 17
A relatively effective approach with sorted numpy arrays:
import numpy as np
b = np.array([1, 3, 3, 8, 12, 12, 12])
np.hstack([b[0], [x[0] for x in zip(b[1:], b[:-1]) if x[0] != x[1]]])
Outputs:
array([ 1, 3, 8, 12])
Answer 18
l = [1,2,2,3,3,...]
n = []
n.extend(ele for ele in l if ele not in set(n))
A generator expression that uses the O(1) look-up of a set to determine whether or not to include an element in the new list. (Note, however, that set(n) is rebuilt for every element of l here, so the approach is still quadratic overall; build the set once if speed matters.)
Answer 19
A simple recursive solution:
def uniquefy_list(a):
    # assumes a non-empty list; a[0] raises IndexError on []
    return uniquefy_list(a[1:]) if a[0] in a[1:] else [a[0]] + uniquefy_list(a[1:]) if len(a) > 1 else [a[0]]
Answer 20
Eliminate duplicate values in a sequence while preserving the order of the remaining items, using a general-purpose generator function.
# for hashable sequence
def remove_duplicates(items):
seen = set()
for item in items:
if item not in seen:
yield item
seen.add(item)
a = [1, 5, 2, 1, 9, 1, 5, 10]
list(remove_duplicates(a))
# [1, 5, 2, 9, 10]
# for unhashable sequence
def remove_duplicates(items, key=None):
seen = set()
for item in items:
val = item if key is None else key(item)
if val not in seen:
yield item
seen.add(val)
a = [ {'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 1, 'y': 2}, {'x': 2, 'y': 4}]
list(remove_duplicates(a, key=lambda d: (d['x'],d['y'])))
# [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 2, 'y': 4}]
Answer 21
If you need a one-liner then maybe this would help:
reduce(lambda x, y: x + y if y[0] not in x else x, map(lambda x: [x],lst))
… should work, but correct me if I’m wrong.
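A quick check suggests it does (on Python 3, reduce must be imported from functools):
>>> from functools import reduce
>>> lst = [1, 2, 1, 3, 2]
>>> reduce(lambda x, y: x + y if y[0] not in x else x, map(lambda x: [x], lst))
[1, 2, 3]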
Answer 22
If you routinely use pandas, and aesthetics is preferred over performance, then consider the built-in function pandas.Series.drop_duplicates:
import pandas as pd
import numpy as np
uniquifier = lambda alist: pd.Series(alist).drop_duplicates().tolist()
# from the chosen answer
def f7(seq):
seen = set()
seen_add = seen.add
return [ x for x in seq if not (x in seen or seen_add(x))]
alist = np.random.randint(low=0, high=1000, size=10000).tolist()
print uniquifier(alist) == f7(alist) # True
Timing:
In [104]: %timeit f7(alist)
1000 loops, best of 3: 1.3 ms per loop
In [110]: %timeit uniquifier(alist)
100 loops, best of 3: 4.39 ms per loop
Answer 23
This will preserve order and run in O(n) time. Basically, the idea is to create a hole wherever a duplicate is found and sink it down to the bottom. It makes use of a read pointer and a write pointer. Whenever a duplicate is found, only the read pointer advances; the write pointer stays on the duplicate entry to overwrite it.
def deduplicate(l):
count = {}
(read,write) = (0,0)
while read < len(l):
if l[read] in count:
read += 1
continue
count[l[read]] = True
l[write] = l[read]
read += 1
write += 1
return l[0:write]
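For example:
>>> deduplicate([5, 6, 6, 1, 1, 2])
[5, 6, 1, 2]
(The count dict is only ever used for membership tests here, so a plain set would serve equally well.)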
Answer 24
A solution without using imported modules or sets:
text = "ask not what your country can do for you ask what you can do for your country"
sentence = text.split(" ")
noduplicates = [sentence[i] for i in range(len(sentence)) if sentence[i] not in sentence[:i]]
print(noduplicates)
Gives output:
['ask', 'not', 'what', 'your', 'country', 'can', 'do', 'for', 'you']
Answer 25
An in-place method
This method is quadratic, because we have a linear lookup into the list for every element of the list (to that we have to add the cost of rearranging the list because of the del operations).
That said, it is possible to operate in place if we start from the end of the list and proceed toward the origin, removing each term that is present in the sub-list at its left.
This idea in code is simply
for i in range(len(l)-1,0,-1):
if l[i] in l[:i]: del l[i]
A simple test of the implementation
In [91]: from random import randint, seed
In [92]: seed('20080808') ; l = [randint(1,6) for _ in range(12)] # Beijing Olympics
In [93]: for i in range(len(l)-1,0,-1):
...: print(l)
...: print(i, l[i], l[:i], end='')
...: if l[i] in l[:i]:
...: print( ': remove', l[i])
...: del l[i]
...: else:
...: print()
...: print(l)
[6, 5, 1, 4, 6, 1, 6, 2, 2, 4, 5, 2]
11 2 [6, 5, 1, 4, 6, 1, 6, 2, 2, 4, 5]: remove 2
[6, 5, 1, 4, 6, 1, 6, 2, 2, 4, 5]
10 5 [6, 5, 1, 4, 6, 1, 6, 2, 2, 4]: remove 5
[6, 5, 1, 4, 6, 1, 6, 2, 2, 4]
9 4 [6, 5, 1, 4, 6, 1, 6, 2, 2]: remove 4
[6, 5, 1, 4, 6, 1, 6, 2, 2]
8 2 [6, 5, 1, 4, 6, 1, 6, 2]: remove 2
[6, 5, 1, 4, 6, 1, 6, 2]
7 2 [6, 5, 1, 4, 6, 1, 6]
[6, 5, 1, 4, 6, 1, 6, 2]
6 6 [6, 5, 1, 4, 6, 1]: remove 6
[6, 5, 1, 4, 6, 1, 2]
5 1 [6, 5, 1, 4, 6]: remove 1
[6, 5, 1, 4, 6, 2]
4 6 [6, 5, 1, 4]: remove 6
[6, 5, 1, 4, 2]
3 4 [6, 5, 1]
[6, 5, 1, 4, 2]
2 1 [6, 5]
[6, 5, 1, 4, 2]
1 5 [6]
[6, 5, 1, 4, 2]
In [94]:
Answer 26
zmk’s approach uses a list comprehension, which is very fast yet keeps the order naturally. It can easily be modified for case-insensitive comparison of strings, as below, and this also preserves the original case.
def DelDupes(aseq) :
seen = set()
return [x for x in aseq if (x.lower() not in seen) and (not seen.add(x.lower()))]
Closely associated functions are:
def HasDupes(aseq) :
s = set()
return any(((x.lower() in s) or s.add(x.lower())) for x in aseq)
def GetDupes(aseq) :
s = set()
return set(x for x in aseq if ((x.lower() in s) or s.add(x.lower())))
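A brief usage sketch of all three:
>>> DelDupes(['Hello', 'WORLD', 'hello', 'world'])
['Hello', 'WORLD']
>>> HasDupes(['Hello', 'hello'])
True
>>> GetDupes(['Hello', 'WORLD', 'hello'])
{'hello'}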
Answer 27
pandas users should check out pandas.unique.
>>> import pandas as pd
>>> lst = [1, 2, 1, 3, 3, 2, 4]
>>> pd.unique(lst)
array([1, 2, 3, 4])
The function returns a NumPy array. If needed, you can convert it to a list with the tolist method.
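For example:
>>> pd.unique(lst).tolist()
[1, 2, 3, 4]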
Answer 28
A one-liner list comprehension:
values_non_duplicated = [value for index, value in enumerate(values) if value not in values[:index]]
Simply add a conditional to check that the value is not at a previous position.