The following module functions all construct and return iterators. Some provide streams of infinite length, so they should only be accessed by functions or loops that truncate the stream.
chain(*iterables)
Make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted. Used for treating consecutive sequences as a single sequence. Equivalent to:
def chain(*iterables):
    # chain('ABC', 'DEF') --> A B C D E F
    for it in iterables:
        for element in it:
            yield element
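For example, a brief interactive session (illustrative, with arbitrary sample sequences):

>>> from itertools import chain
>>> list(chain('ABC', 'DEF'))
['A', 'B', 'C', 'D', 'E', 'F']
>>> list(chain([1, 2], (3,), 'ab'))      # any mix of iterables works
[1, 2, 3, 'a', 'b']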
count([n])
Make an iterator that returns consecutive integers starting with n; if not specified, n defaults to zero. Equivalent to:
def count(n=0):
    # count(10) --> 10 11 12 13 14 ...
    while True:
        yield n
        n += 1
Note, count() does not check for overflow and will return negative numbers after exceeding sys.maxint. This behavior may change in the future.
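Because count() is infinite, it is normally paired with something that truncates the stream, such as islice() or zip(); a small illustrative session:

>>> from itertools import count, islice
>>> list(islice(count(10), 5))           # take only the first five values
[10, 11, 12, 13, 14]
>>> zip(count(1), ['a', 'b', 'c'])       # add sequence numbers
[(1, 'a'), (2, 'b'), (3, 'c')]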
cycle(iterable)
Make an iterator returning elements from the iterable and saving a copy of each. When the iterable is exhausted, return elements from the saved copy. Repeats indefinitely. Equivalent to:
def cycle(iterable):
    # cycle('ABCD') --> A B C D A B C D A B C D ...
    saved = []
    for element in iterable:
        yield element
        saved.append(element)
    while saved:
        for element in saved:
            yield element
Note, this member of the toolkit may require significant auxiliary storage (depending on the length of the iterable).
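As with count(), the stream is infinite, so a truncating tool such as islice() is typically used alongside it; a quick sketch:

>>> from itertools import cycle, islice
>>> list(islice(cycle('ABCD'), 10))      # stop after ten elements
['A', 'B', 'C', 'D', 'A', 'B', 'C', 'D', 'A', 'B']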
dropwhile(predicate, iterable)
Make an iterator that drops elements from the iterable as long as the predicate is true; afterwards, returns every remaining element. Equivalent to:
def dropwhile(predicate, iterable):
    # dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1
    iterable = iter(iterable)
    for x in iterable:
        if not predicate(x):
            yield x
            break
    for x in iterable:
        yield x
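A typical use is skipping a leading run of unwanted items, such as header lines; the data below is made up for illustration:

>>> from itertools import dropwhile
>>> lines = ['# header', '# config', 'payload 1', 'payload 2']
>>> list(dropwhile(lambda s: s.startswith('#'), lines))
['payload 1', 'payload 2']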
groupby(iterable[, key])
Make an iterator that returns consecutive keys and groups from the iterable. The key is a function computing a key value for each element. If not specified or None, key defaults to an identity function and returns the element unchanged. Generally, the iterable needs to already be sorted on the same key function.
The returned group is itself an iterator that shares the underlying iterable with groupby(). Because the source is shared, when the groupby object is advanced, the previous group is no longer visible. So, if that data is needed later, it should be stored as a list:
groups = []
uniquekeys = []
for k, g in groupby(data, keyfunc):
    groups.append(list(g))      # Store group iterator as a list
    uniquekeys.append(k)
groupby() is equivalent to:
class groupby(object):
    # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
    # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
    def __init__(self, iterable, key=None):
        if key is None:
            key = lambda x: x
        self.keyfunc = key
        self.it = iter(iterable)
        self.tgtkey = self.currkey = self.currvalue = xrange(0)
    def __iter__(self):
        return self
    def next(self):
        while self.currkey == self.tgtkey:
            self.currvalue = self.it.next()     # Exit on StopIteration
            self.currkey = self.keyfunc(self.currvalue)
        self.tgtkey = self.currkey
        return (self.currkey, self._grouper(self.tgtkey))
    def _grouper(self, tgtkey):
        while self.currkey == tgtkey:
            yield self.currvalue
            self.currvalue = self.it.next()     # Exit on StopIteration
            self.currkey = self.keyfunc(self.currvalue)
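Putting the pieces together, a small session that sorts hypothetical data on a key and then groups on that same key:

>>> from itertools import groupby
>>> words = ['apple', 'ant', 'bee', 'bear', 'cat']        # sample data
>>> words.sort(key=lambda s: s[0])                        # sort on the grouping key first
>>> for k, g in groupby(words, key=lambda s: s[0]):
...     print k, list(g)
...
a ['apple', 'ant']
b ['bee', 'bear']
c ['cat']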
ifilter(predicate, iterable)
Make an iterator that filters elements from the iterable, returning only those for which the predicate is True. If predicate is None, return the items that are true.
Equivalent to:
def ifilter(predicate, iterable):
    # ifilter(lambda x: x%2, range(10)) --> 1 3 5 7 9
    if predicate is None:
        predicate = bool
    for x in iterable:
        if predicate(x):
            yield x
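For instance (the values shown are what these calls produce):

>>> from itertools import ifilter
>>> list(ifilter(lambda x: x % 2, range(10)))
[1, 3, 5, 7, 9]
>>> list(ifilter(None, [0, 1, '', 'a', [], [2]]))    # keep only true values
[1, 'a', [2]]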
ifilterfalse(predicate, iterable)
Make an iterator that filters elements from the iterable, returning only those for which the predicate is False. If predicate is None, return the items that are false.
Equivalent to:
def ifilterfalse(predicate, iterable):
    # ifilterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8
    if predicate is None:
        predicate = bool
    for x in iterable:
        if not predicate(x):
            yield x
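A short companion to the ifilter() example above:

>>> from itertools import ifilterfalse
>>> list(ifilterfalse(lambda x: x % 2, range(10)))
[0, 2, 4, 6, 8]
>>> list(ifilterfalse(None, [0, 1, '', 'a', [], [2]]))   # keep only false values
[0, '', []]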
imap(function, *iterables)
Make an iterator that computes the function using arguments from each of the iterables. If function is None, then imap() returns the arguments as a tuple. Like map() but stops when the shortest iterable is exhausted instead of filling in None for shorter iterables. The reason for the difference is that infinite iterator arguments are typically an error for map() (because the output is fully evaluated) but represent a common and useful way of supplying arguments to imap().
Equivalent to:
def imap(function, *iterables):
    # imap(pow, (2,3,10), (5,2,3)) --> 32 9 1000
    iterables = map(iter, iterables)
    while True:
        args = [i.next() for i in iterables]
        if function is None:
            yield tuple(args)
        else:
            yield function(*args)
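A short session illustrating both the tuple behavior when function is None and the stop-at-shortest behavior:

>>> from itertools import imap, repeat
>>> list(imap(pow, (2, 3, 10), (5, 2, 3)))
[32, 9, 1000]
>>> list(imap(None, 'AB', 'xy'))              # function=None returns argument tuples
[('A', 'x'), ('B', 'y')]
>>> list(imap(pow, xrange(5), repeat(2)))     # an infinite second argument is fine
[0, 1, 4, 9, 16]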
islice(iterable, [start,] stop[, step])
Make an iterator that returns selected elements from the iterable. If stop is None, then iteration continues until the iterator is exhausted, if at all; otherwise, it stops at the specified position. Unlike regular slicing, islice() does not support negative values for start, stop, or step. Can be used to extract related fields from data where the internal structure has been flattened (for example, a multi-line report may list a name field on every third line). Equivalent to:
def islice(iterable, *args):
    # islice('ABCDEFG', 2) --> A B
    # islice('ABCDEFG', 2, 4) --> C D
    # islice('ABCDEFG', 2, None) --> C D E F G
    # islice('ABCDEFG', 0, None, 2) --> A C E G
    s = slice(*args)
    it = iter(xrange(s.start or 0, s.stop or sys.maxint, s.step or 1))
    nexti = it.next()
    for i, element in enumerate(iterable):
        if i == nexti:
            yield element
            nexti = it.next()
If start is None, then iteration starts at zero. If step is None, then the step defaults to one.
Changed in version 2.5: accept None values for default start and step.
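To illustrate the flattened-report use case mentioned above, a small session with made-up report lines:

>>> from itertools import islice
>>> list(islice('ABCDEFG', 2, None))
['C', 'D', 'E', 'F', 'G']
>>> report = ['name: Alice', 'age: 30', 'city: Oslo',      # hypothetical data
...           'name: Bob', 'age: 25', 'city: Bergen']
>>> list(islice(report, 0, None, 3))                       # the name field is every third line
['name: Alice', 'name: Bob']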
izip(*iterables)
Make an iterator that aggregates elements from each of the iterables. Like zip() except that it returns an iterator instead of a list. Used for lock-step iteration over several iterables at a time. Equivalent to:
def izip(*iterables):
    # izip('ABCD', 'xy') --> Ax By
    iterables = map(iter, iterables)
    while iterables:
        result = [it.next() for it in iterables]
        yield tuple(result)
Changed in version 2.4: When no iterables are specified, returns a zero length iterator instead of raising a TypeError exception.
Note, the left-to-right evaluation order of the iterables is guaranteed. This makes possible an idiom for clustering a data series into n-length groups using "izip(*[iter(s)]*n)". For data that doesn't fit n-length groups exactly, the last tuple can be pre-padded with fill values using "izip(*[chain(s, [None]*(n-1))]*n)".
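The clustering idiom from the preceding note, shown with concrete numbers (here n is 3):

>>> from itertools import izip, chain
>>> s = range(9)
>>> list(izip(*[iter(s)] * 3))                     # group into threes
[(0, 1, 2), (3, 4, 5), (6, 7, 8)]
>>> s = range(8)                                   # length not a multiple of 3
>>> list(izip(*[chain(s, [None] * 2)] * 3))        # pad the last group with None
[(0, 1, 2), (3, 4, 5), (6, 7, None)]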
Note, when izip() is used with unequal length inputs, subsequent iteration over the longer iterables cannot reliably be continued after izip() terminates. Potentially, up to one entry will be missing from each of the left-over iterables. This occurs because a value is fetched from each iterator in turn, but the process ends when one of the iterators terminates. This leaves the last fetched values in limbo (they cannot be returned in a final, incomplete tuple and they cannot be pushed back into the iterator for retrieval with it.next()). In general, izip() should only be used with unequal length inputs when you don't care about trailing, unmatched values from the longer iterables.
repeat(object[, times])
Make an iterator that returns object over and over again. Runs indefinitely unless the times argument is specified. Equivalent to:
def repeat(object, times=None):
    # repeat(10, 3) --> 10 10 10
    if times is None:
        while True:
            yield object
    else:
        for i in xrange(times):
            yield object
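For example, repeat() can supply a constant field when zipping; a brief illustration:

>>> from itertools import repeat, izip
>>> list(repeat(10, 3))
[10, 10, 10]
>>> list(izip(['a', 'b', 'c'], repeat(0)))    # constant second field in each record
[('a', 0), ('b', 0), ('c', 0)]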
starmap(function, iterable)
Make an iterator that computes the function using argument tuples obtained from the iterable. Used instead of imap() when the argument parameters are already grouped in tuples from a single iterable (the data has been "pre-zipped"). The difference between imap() and starmap() parallels the distinction between function(a,b) and function(*c).
Equivalent to:
def starmap(function, iterable):
    # starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000
    iterable = iter(iterable)
    while True:
        yield function(*iterable.next())
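A quick contrast with imap(): the argument tuples come pre-grouped from a single iterable:

>>> from itertools import starmap
>>> list(starmap(pow, [(2, 5), (3, 2), (10, 3)]))
[32, 9, 1000]
>>> list(starmap(max, [(1, 9), (7, 3)]))          # each tuple is unpacked into max()
[9, 7]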
takewhile(predicate, iterable)
Make an iterator that returns elements from the iterable as long as the predicate is true. Equivalent to:
def takewhile(predicate, iterable):
    # takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4
    for x in iterable:
        if predicate(x):
            yield x
        else:
            break
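takewhile() is one way to truncate an infinite stream such as count(); for instance:

>>> from itertools import takewhile, count
>>> list(takewhile(lambda x: x < 5, [1, 4, 6, 4, 1]))
[1, 4]
>>> list(takewhile(lambda x: x * x < 40, count()))    # stops once the predicate fails
[0, 1, 2, 3, 4, 5, 6]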
tee(iterable[, n=2])
Return n independent iterators from a single iterable. The case where n==2 is equivalent to:
def tee(iterable):
    def gen(next, data={}, cnt=[0]):
        for i in count():
            if i == cnt[0]:
                item = data[i] = next()
                cnt[0] += 1
            else:
                item = data.pop(i)
            yield item
    it = iter(iterable)
    return (gen(it.next), gen(it.next))
Note, once tee() has made a split, the original iterable should not be used anywhere else; otherwise, the iterable could get advanced without the tee objects being informed.
Note, this member of the toolkit may require significant auxiliary storage (depending on how much temporary data needs to be stored). In general, if one iterator is going to use most or all of the data before the other iterator, it is faster to use list() instead of tee().
New in version 2.4.
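One common pattern is pairing each element with its successor by advancing one of the copies; a small sketch over a short list:

>>> from itertools import tee, izip
>>> a, b = tee([1, 3, 6, 10])
>>> b.next()                          # advance the second copy by one element
1
>>> list(izip(a, b))                  # pairs of adjacent items
[(1, 3), (3, 6), (6, 10)]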