How do I check if there are duplicates in a flat list?

Question: How do I check if there are duplicates in a flat list?

For example, given the list ['one', 'two', 'one'], the algorithm should return True, whereas given ['one', 'two', 'three'] it should return False.


Answer 0

Use set() to remove duplicates if all values are hashable:

>>> your_list = ['one', 'two', 'one']
>>> len(your_list) != len(set(your_list))
True

Answer 1

Recommended for short lists only:

any(thelist.count(x) > 1 for x in thelist)

Do not use on a long list — it can take time proportional to the square of the number of items in the list!

For longer lists with hashable items (strings, numbers, &c):

def anydup(thelist):
  seen = set()
  for x in thelist:
    if x in seen: return True
    seen.add(x)
  return False

If your items are not hashable (sublists, dicts, etc.) it gets hairier, though it may still be possible to get O(N log N) if they're at least comparable. But you need to know or test the characteristics of the items (hashable or not, comparable or not) to get the best performance you can: O(N) for hashables, O(N log N) for non-hashable comparables, and otherwise it's down to O(N squared), with nothing one can do about it :-(.
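
As a sketch of the O(N log N) route for items that are comparable but not hashable (the helper sort_based_anydup below is an illustration, not part of the original answer): sort a copy of the list and check adjacent elements for equality.

def sort_based_anydup(thelist):
    # Sorting puts equal items next to each other; O(N log N) overall.
    s = sorted(thelist)
    return any(a == b for a, b in zip(s, s[1:]))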


Answer 2

This is old, but the answers here led me to a slightly different solution. If you are up for abusing comprehensions, you can get short-circuiting this way.

xs = [1, 2, 1]
s = set()
any(x in s or s.add(x) for x in xs)
# You can use a similar approach to actually retrieve the duplicates.
s = set()
duplicates = set(x for x in xs if x in s or s.add(x))
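
For illustration, this is roughly what an interactive session with the list above would show:

>>> xs = [1, 2, 1]
>>> s = set()
>>> any(x in s or s.add(x) for x in xs)
True
>>> s = set()
>>> set(x for x in xs if x in s or s.add(x))
{1}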

Answer 3

If you are fond of the functional programming style, here is a useful function, self-documenting and tested using doctest.

from functools import reduce  # needed in Python 3, where reduce is no longer a builtin

def decompose(a_list):
    """Turns a list into a set of all elements and a set of duplicated elements.

    Returns a pair of sets. The first one contains elements
    that are found at least once in the list. The second one
    contains elements that appear more than once.

    >>> decompose([1, 2, 3, 5, 3, 2, 6])
    ({1, 2, 3, 5, 6}, {2, 3})
    """
    # The accumulator is a pair (u, d): u holds every element seen so far,
    # d holds elements that have been seen more than once.
    return reduce(
        lambda ud, o: (ud[0].union([o]), ud[1].union(ud[0].intersection([o]))),
        a_list,
        (set(), set()))

if __name__ == "__main__":
    import doctest
    doctest.testmod()

From there you can test uniqueness by checking whether the second element of the returned pair is empty:

def is_set(l):
    """Test if there is no duplicate element in l.

    >>> is_set([1,2,3])
    True
    >>> is_set([1,2,1])
    False
    >>> is_set([])
    True
    """
    return not decompose(l)[1]

Note that this is not efficient, since you are explicitly constructing the full decomposition. But staying with reduce, you can come up with something equivalent (though slightly less efficient) to answer 5:

def is_set(l):
    try:
        def func(s, o):
            if o in s:
                raise Exception
            return s.union([o])
        reduce(func, l, set())
        return True
    except:
        return False

Answer 4

I thought it would be useful to compare the timings of the different solutions presented here. For this I used my own library simple_benchmark:

So indeed, for this case the solution from Denis Otkidach is the fastest.

Some of the approaches also exhibit a much steeper curve; these are the approaches that scale quadratically with the number of elements (Alex Martelli's first solution, wjandrea's solution, and both of Xavier Decoret's solutions). Also important to mention is that the pandas solution from Keiku has a very large constant factor, but for larger lists it almost catches up with the other solutions.

And here is the case where the duplicate is at the first position, which is useful for seeing which solutions short-circuit:

Here several approaches don't short-circuit: Keiku, Frank, Xavier_Decoret (first solution), Turn, Alex Martelli (first solution), and the approach presented by Denis Otkidach (which was fastest in the no-duplicate case).

I included a function from my own library here, iteration_utilities.all_distinct, which can compete with the fastest solution in the no-duplicates case and performs in constant time in the duplicate-at-the-beginning case (although it is not the fastest there).
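
As a minimal usage sketch (assuming iteration_utilities is installed), all_distinct can be called directly; the expected results are shown as comments:

from iteration_utilities import all_distinct

print(all_distinct(['one', 'two', 'three']))  # True: every element is unique
print(all_distinct(['one', 'two', 'one']))    # False: 'one' appears twice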

The code for the benchmark:

from collections import Counter
from functools import reduce

import pandas as pd
from simple_benchmark import BenchmarkBuilder
from iteration_utilities import all_distinct

b = BenchmarkBuilder()

@b.add_function()
def Keiku(l):
    return pd.Series(l).duplicated().sum() > 0

@b.add_function()
def Frank(num_list):
    unique = []
    dupes = []
    for i in num_list:
        if i not in unique:
            unique.append(i)
        else:
            dupes.append(i)
    if len(dupes) != 0:
        return False
    else:
        return True

@b.add_function()
def wjandrea(iterable):
    seen = []
    for x in iterable:
        if x in seen:
            return True
        seen.append(x)
    return False

@b.add_function()
def user(iterable):
    clean_elements_set = set()
    clean_elements_set_add = clean_elements_set.add

    for possible_duplicate_element in iterable:

        if possible_duplicate_element in clean_elements_set:
            return True

        else:
            clean_elements_set_add( possible_duplicate_element )

    return False

@b.add_function()
def Turn(l):
    return Counter(l).most_common()[0][1] > 1

def getDupes(l):
    seen = set()
    seen_add = seen.add
    for x in l:
        if x in seen or seen_add(x):
            yield x

@b.add_function()          
def F1Rumors(l):
    try:
        if next(getDupes(l)): return True    # Found a dupe
    except StopIteration:
        pass
    return False

def decompose(a_list):
    return reduce(
        lambda u, o : (u[0].union([o]), u[1].union(u[0].intersection([o]))),
        a_list,
        (set(), set()))

@b.add_function()
def Xavier_Decoret_1(l):
    return not decompose(l)[1]

@b.add_function()
def Xavier_Decoret_2(l):
    try:
        def func(s, o):
            if o in s:
                raise Exception
            return s.union([o])
        reduce(func, l, set())
        return True
    except:
        return False

@b.add_function()
def pyrospade(xs):
    s = set()
    return any(x in s or s.add(x) for x in xs)

@b.add_function()
def Alex_Martelli_1(thelist):
    return any(thelist.count(x) > 1 for x in thelist)

@b.add_function()
def Alex_Martelli_2(thelist):
    seen = set()
    for x in thelist:
        if x in seen: return True
        seen.add(x)
    return False

@b.add_function()
def Denis_Otkidach(your_list):
    return len(your_list) != len(set(your_list))

@b.add_function()
def MSeifert04(l):
    return not all_distinct(l)

And for the arguments:


# No duplicate run
@b.add_arguments('list size')
def arguments():
    for exp in range(2, 14):
        size = 2**exp
        yield size, list(range(size))

# Duplicate at beginning run
@b.add_arguments('list size')
def arguments():
    for exp in range(2, 14):
        size = 2**exp
        yield size, [0, *list(range(size))]

# Running and plotting
r = b.run()
r.plot()

Answer 5

I recently answered a related question about finding all the duplicates in a list, using a generator. It has the advantage that, if it is used just to establish whether there is a duplicate, you only need to get the first item and the rest can be ignored, which is the ultimate shortcut.

This is an interesting set based approach I adapted straight from moooeeeep:

def getDupes(l):
    seen = set()
    seen_add = seen.add
    for x in l:
        if x in seen or seen_add(x):
            yield x

Accordingly, a full list of dupes would be list(getDupes(etc)). To simply test “if” there is a dupe, it should be wrapped as follows:

def hasDupes(l):
    try:
        if next(getDupes(l)): return True    # Found a dupe (Python 3: next(gen), not gen.next())
    except StopIteration:
        pass
    return False

This scales well and provides consistent operating times wherever the dupe is in the list (I tested with lists of up to 1m entries). If you know something about the data, specifically that dupes are likely to show up in the first half, or other things that let you skew your requirements, like needing to get the actual dupes, then there are a couple of alternative dupe locators that might outperform it. The two I recommend are:

Simple dict based approach, very readable:

def getDupes(c):
    d = {}
    for i in c:
        if i in d:
            if d[i]:
                yield i
                d[i] = False
        else:
            d[i] = True

Leverage itertools (essentially an ifilter/izip/tee) on the sorted list; very efficient if you are getting all the dupes, though not as quick for getting just the first:

import itertools

def getDupes(c):
    # Sort first so equal items become adjacent, then compare each element with its neighbour.
    a, b = itertools.tee(sorted(c))
    next(b, None)
    r = None
    # The original used itertools.ifilter/izip; in Python 3 these are the builtin filter/zip.
    for k, g in filter(lambda x: x[0] == x[1], zip(a, b)):
        if k != r:
            yield k
            r = k

These were the top performers from the approaches I tried for the full dupe list, with the first dupe occurring anywhere in a 1m element list from the start to the middle. It was surprising how little overhead the sort step added. Your mileage may vary, but here are my specific timed results:

Finding FIRST duplicate, single dupe places "n" elements in to 1m element array

Test set len change :        50 -  . . . . .  -- 0.002
Test in dict        :        50 -  . . . . .  -- 0.002
Test in set         :        50 -  . . . . .  -- 0.002
Test sort/adjacent  :        50 -  . . . . .  -- 0.023
Test sort/groupby   :        50 -  . . . . .  -- 0.026
Test sort/zip       :        50 -  . . . . .  -- 1.102
Test sort/izip      :        50 -  . . . . .  -- 0.035
Test sort/tee/izip  :        50 -  . . . . .  -- 0.024
Test moooeeeep      :        50 -  . . . . .  -- 0.001 *
Test iter*/sorted   :        50 -  . . . . .  -- 0.027

Test set len change :      5000 -  . . . . .  -- 0.017
Test in dict        :      5000 -  . . . . .  -- 0.003 *
Test in set         :      5000 -  . . . . .  -- 0.004
Test sort/adjacent  :      5000 -  . . . . .  -- 0.031
Test sort/groupby   :      5000 -  . . . . .  -- 0.035
Test sort/zip       :      5000 -  . . . . .  -- 1.080
Test sort/izip      :      5000 -  . . . . .  -- 0.043
Test sort/tee/izip  :      5000 -  . . . . .  -- 0.031
Test moooeeeep      :      5000 -  . . . . .  -- 0.003 *
Test iter*/sorted   :      5000 -  . . . . .  -- 0.031

Test set len change :     50000 -  . . . . .  -- 0.035
Test in dict        :     50000 -  . . . . .  -- 0.023
Test in set         :     50000 -  . . . . .  -- 0.023
Test sort/adjacent  :     50000 -  . . . . .  -- 0.036
Test sort/groupby   :     50000 -  . . . . .  -- 0.134
Test sort/zip       :     50000 -  . . . . .  -- 1.121
Test sort/izip      :     50000 -  . . . . .  -- 0.054
Test sort/tee/izip  :     50000 -  . . . . .  -- 0.045
Test moooeeeep      :     50000 -  . . . . .  -- 0.019 *
Test iter*/sorted   :     50000 -  . . . . .  -- 0.055

Test set len change :    500000 -  . . . . .  -- 0.249
Test in dict        :    500000 -  . . . . .  -- 0.145
Test in set         :    500000 -  . . . . .  -- 0.165
Test sort/adjacent  :    500000 -  . . . . .  -- 0.139
Test sort/groupby   :    500000 -  . . . . .  -- 1.138
Test sort/zip       :    500000 -  . . . . .  -- 1.159
Test sort/izip      :    500000 -  . . . . .  -- 0.126
Test sort/tee/izip  :    500000 -  . . . . .  -- 0.120 *
Test moooeeeep      :    500000 -  . . . . .  -- 0.131
Test iter*/sorted   :    500000 -  . . . . .  -- 0.157

Answer 6

Another way of doing this succinctly is with Counter.

To just determine if there are any duplicates in the original list:

from collections import Counter

def has_dupes(l):
    # second element of the tuple has number of repetitions
    return Counter(l).most_common()[0][1] > 1

Or to get a list of items that have duplicates:

def get_dupes(l):
    return [k for k, v in Counter(l).items() if v > 1]
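
For illustration, applying these helpers to the list from the question should give:

print(has_dupes(['one', 'two', 'one']))    # True: 'one' occurs twice
print(get_dupes(['one', 'two', 'one']))    # ['one']
print(has_dupes(['one', 'two', 'three']))  # False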

Answer 7

my_list = ['one', 'two', 'one']

duplicates = []

for value in my_list:
  if my_list.count(value) > 1:
    if value not in duplicates:
      duplicates.append(value)

print(duplicates)  # ["one"]

Answer 8

I found this to give the best performance because it short-circuits as soon as the first duplicate is found; the algorithm has O(n) time and space complexity, where n is the length of the list:

def has_duplicated_elements(iterable):
    """ Given an `iterable`, return True if there are duplicated entries. """
    clean_elements_set = set()
    clean_elements_set_add = clean_elements_set.add

    for possible_duplicate_element in iterable:

        if possible_duplicate_element in clean_elements_set:
            return True

        else:
            clean_elements_set_add( possible_duplicate_element )

    return False

Answer 9

I don't really know what set does behind the scenes, so I just like to keep it simple.

def dupes(num_list):
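    # Note: this returns True when there are NO duplicates (the inverse of most other answers here).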
    unique = []
    dupes = []
    for i in num_list:
        if i not in unique:
            unique.append(i)
        else:
            dupes.append(i)
    if len(dupes) != 0:
        return False
    else:
        return True

Answer 10

A simpler solution is as follows: check True/False with the pandas .duplicated() method and then take the sum. See also pandas.Series.duplicated in the pandas 0.24.1 documentation.

import pandas as pd

def has_duplicated(l):
    return pd.Series(l).duplicated().sum() > 0

print(has_duplicated(['one', 'two', 'one']))
# True
print(has_duplicated(['one', 'two', 'three']))
# False

Answer 11

If the list contains unhashable items, you can use Alex Martelli’s solution but with a list instead of a set, though it’s slower for larger inputs: O(N^2).

def has_duplicates(iterable):
    seen = []
    for x in iterable:
        if x in seen:
            return True
        seen.append(x)
    return False
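
For illustration, this also works with unhashable items such as nested lists, which a set-based check would reject with a TypeError:

print(has_duplicates([[1, 2], [3], [1, 2]]))  # True: [1, 2] appears twice
print(has_duplicates([[1, 2], [3, 4]]))       # False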

Answer 12

I used pyrospade's approach, for its simplicity, and modified it slightly for a short list made from the case-insensitive Windows registry.

If the raw PATH value string is split into individual paths, all 'null' paths (empty or whitespace-only strings) can be removed by using:

PATH_nonulls = [s for s in PATH if s.strip()]

def HasDupes(aseq) :
    s = set()
    return any(((x.lower() in s) or s.add(x.lower())) for x in aseq)

def GetDupes(aseq) :
    s = set()
    return set(x for x in aseq if ((x.lower() in s) or s.add(x.lower())))

def DelDupes(aseq) :
    seen = set()
    return [x for x in aseq if (x.lower() not in seen) and (not seen.add(x.lower()))]

The original PATH has both ‘null’ entries and duplicates for testing purposes:

[list]  Root paths in HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment:PATH
  1  C:\Python37\
  2
  3
  4  C:\Python37\Scripts\
  5  c:\python37\
  6  C:\Program Files\ImageMagick-7.0.8-Q8
  7  C:\Program Files (x86)\poppler\bin
  8  D:\DATA\Sounds
  9  C:\Program Files (x86)\GnuWin32\bin
 10  C:\Program Files (x86)\Intel\iCLS Client\
 11  C:\Program Files\Intel\iCLS Client\
 12  D:\DATA\CCMD\FF
 13  D:\DATA\CCMD
 14  D:\DATA\UTIL
 15  C:\
 16  D:\DATA\UHELP
 17  %SystemRoot%\system32
 18
 19
 20  D:\DATA\CCMD\FF%SystemRoot%
 21  D:\DATA\Sounds
 22  %SystemRoot%\System32\Wbem
 23  D:\DATA\CCMD\FF
 24
 25
 26  c:\
 27  %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\
 28

Null paths have been removed, but duplicates remain, e.g., (1, 3) and (13, 20):

    [list]  Null paths removed from HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment:PATH
  1  C:\Python37\
  2  C:\Python37\Scripts\
  3  c:\python37\
  4  C:\Program Files\ImageMagick-7.0.8-Q8
  5  C:\Program Files (x86)\poppler\bin
  6  D:\DATA\Sounds
  7  C:\Program Files (x86)\GnuWin32\bin
  8  C:\Program Files (x86)\Intel\iCLS Client\
  9  C:\Program Files\Intel\iCLS Client\
 10  D:\DATA\CCMD\FF
 11  D:\DATA\CCMD
 12  D:\DATA\UTIL
 13  C:\
 14  D:\DATA\UHELP
 15  %SystemRoot%\system32
 16  D:\DATA\CCMD\FF%SystemRoot%
 17  D:\DATA\Sounds
 18  %SystemRoot%\System32\Wbem
 19  D:\DATA\CCMD\FF
 20  c:\
 21  %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\

And finally, the dupes have been removed:

[list]  Massaged path list from HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment:PATH
  1  C:\Python37\
  2  C:\Python37\Scripts\
  3  C:\Program Files\ImageMagick-7.0.8-Q8
  4  C:\Program Files (x86)\poppler\bin
  5  D:\DATA\Sounds
  6  C:\Program Files (x86)\GnuWin32\bin
  7  C:\Program Files (x86)\Intel\iCLS Client\
  8  C:\Program Files\Intel\iCLS Client\
  9  D:\DATA\CCMD\FF
 10  D:\DATA\CCMD
 11  D:\DATA\UTIL
 12  C:\
 13  D:\DATA\UHELP
 14  %SystemRoot%\system32
 15  D:\DATA\CCMD\FF%SystemRoot%
 16  %SystemRoot%\System32\Wbem
 17  %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\

Answer 13

def check_duplicates(my_list):
    seen = {}
    for item in my_list:
        if seen.get(item):
            return True
        seen[item] = True
    return False