with open('your_file.txt', 'w') as f:
    for item in my_list:
        f.write("%s\n" % item)
In Python 2, you can also use:
with open('your_file.txt', 'w') as f:
    for item in my_list:
        print >> f, item
If you’re keen on a single function call, at least remove the square brackets [], so that the strings to be printed get made one at a time (a genexp rather than a listcomp) — no reason to take up all the memory required to materialize the whole list of strings.
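A minimal sketch of that single call with a generator expression (`my_list` is a placeholder iterable):

```python
my_list = [1, 2, 3]  # placeholder list of printable items

# Passing a generator expression to writelines() formats each line on demand,
# so the full list of strings is never materialized in memory.
with open('your_file.txt', 'w') as f:
    f.writelines("%s\n" % item for item in my_list)
```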
I thought it would be interesting to explore the benefits of using a genexp, so here’s my take.
The example in the question uses square brackets to create a temporary list, and so is equivalent to:
file.writelines( list( "%s\n" % item for item in list ) )
This needlessly constructs a temporary list of all the lines that will be written out, which may consume significant amounts of memory depending on the size of your list and how verbose the output of str(item) is.
Dropping the square brackets (equivalent to removing the wrapping list() call above) will instead pass a temporary generator to file.writelines():
file.writelines( "%s\n" % item for item in list )
This generator will create newline-terminated representations of your item objects on demand (i.e. as they are written out). This is nice for a couple of reasons:
Memory overheads are small, even for very large lists
If str(item) is slow there’s visible progress in the file as each item is processed
This avoids memory issues, such as:
In [1]: import os
In [2]: f = file(os.devnull, "w")
In [3]: %timeit f.writelines( "%s\n" % item for item in xrange(2**20) )
1 loops, best of 3: 385 ms per loop
In [4]: %timeit f.writelines( ["%s\n" % item for item in xrange(2**20)] )
ERROR: Internal Python error in the inspect module.
Below is the traceback from this internal error.
Traceback (most recent call last):
...
MemoryError
(I triggered this error by limiting Python’s max. virtual memory to ~100MB with ulimit -v 102400).
Putting memory usage to one side, this method isn’t actually any faster than the original:
In [4]: %timeit f.writelines( "%s\n" % item for item in xrange(2**20) )
1 loops, best of 3: 370 ms per loop
In [5]: %timeit f.writelines( ["%s\n" % item for item in xrange(2**20)] )
1 loops, best of 3: 360 ms per loop
(Python 2.6.2 on Linux)
Answer 6
Because I'm lazy…
import json

a = [1, 2, 3]
with open('test.txt', 'w') as f:
    f.write(json.dumps(a))

# Now read the file back into a Python list object
with open('test.txt', 'r') as f:
    a = json.loads(f.read())
Serialize the list into a text file with comma-separated values:
mylist = dir()
with open('filename.txt', 'w') as f:
    f.write(','.join(mylist))
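A sketch of reading the same file back (assuming the values contain no embedded commas; the list here is a placeholder):

```python
mylist = ['alpha', 'beta', 'gamma']  # placeholder values, no embedded commas

# Write the list as one comma-separated line...
with open('filename.txt', 'w') as f:
    f.write(','.join(mylist))

# ...and split on ',' to recover it.
with open('filename.txt') as f:
    restored = f.read().split(',')
```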
Answer 8
In general
Here is the syntax of the writelines() method:
fileObject.writelines( sequence )
Example
#!/usr/bin/python
# Open a file
fo = open("foo.txt", "a+")  # note: "rw+" is not a valid mode; "a+" appends and reads
seq = ["This is 6th line\n", "This is 7th line"]
# Write sequence of lines at the end of the file.
line = fo.writelines( seq )
# Close opened file
fo.close()
This logic will first convert the items in the list to strings (str). Sometimes the list contains tuples, like:
alist = [(112, 'tiger'),
         (113, 'lion')]
This logic will write each tuple to the file on a new line. We can later use eval to load each tuple when reading the file:
outfile = open('outfile.txt', 'w')   # open a file in write mode
for item in list_to_persistence:     # iterate over the list items
    outfile.write(str(item) + '\n')  # write to the file
outfile.close()  # close the file
Answer 14
Another way of iterating and adding a newline:
for item in items:
    filewriter.write(f"{item}" + "\n")
In [29]: a = n.array(avg)  # assumes "import numpy as n"
In [31]: a.tofile('avgpoints.dat', sep='\n', format='%f')  # tofile takes "format", not "dtype"
You can use %e or %s depending on your requirement.
Answer 18
poem = '''\
Programming is fun
When the work is done
if you wanna make your work also fun:
use Python!
'''
f = open('poem.txt', 'w') # open for 'w'riting
f.write(poem) # write text to file
f.close() # close the file
How It Works:
First, open a file by using the built-in open function and specifying the name of
the file and the mode in which we want to open the file. The mode can be a
read mode (’r’), write mode (’w’) or append mode (’a’). We can also specify
whether we are reading, writing, or appending in text mode (’t’) or binary
mode (’b’). There are actually many more modes available and help(open)
will give you more details about them. By default, open() considers the file to
be a ’t’ext file and opens it in ’r’ead mode.
In our example, we first open the file in write text mode and use the write
method of the file object to write to the file and then we finally close the file.
The above example is from the book “A Byte of Python” by Swaroop C H (swaroopch.com).
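The modes described above can be sketched briefly (demo.txt is an illustrative file name); note that open('demo.txt') with no mode argument is the same as open('demo.txt', 'rt'):

```python
with open('demo.txt', 'w') as f:   # 'w': write, text mode, truncates the file
    f.write('hello\n')
with open('demo.txt', 'a') as f:   # 'a': append to the end
    f.write('world\n')
with open('demo.txt') as f:        # default mode: 'rt' (read, text)
    contents = f.read()
print(contents)
```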
I want to change a couple of files at one time, iff I can write to all of them. I’m wondering if I somehow can combine the multiple open calls with the with statement:
try:
    with open('a', 'w') as a and open('b', 'w') as b:
        do_something()
except IOError as e:
    print 'Operation failed: %s' % e.strerror
If that’s not possible, what would an elegant solution to this problem look like?
Answer 0
As of Python 2.7 (or 3.1 respectively) you can write
with open('a', 'w') as a, open('b', 'w') as b:
    do_something()
In earlier versions of Python, you can sometimes use
contextlib.nested() to nest context managers. This won’t work as expected for opening multiples files, though — see the linked documentation for details.
In the rare case that you want to open a variable number of files all at the same time, you can use contextlib.ExitStack, starting from Python version 3.3:
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    # Do something with "files"
Most of the time when you have a variable set of files, though, you likely want to open them one after the other.
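That sequential pattern might look like this (the file names and per-file work are placeholders):

```python
filenames = ['a.txt', 'b.txt']  # placeholder list of files

# Open each file in turn; every file is closed before the next one opens.
for fname in filenames:
    with open(fname, 'w') as f:
        f.write('written to %s\n' % fname)
```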
Answer 1
Just replace and with a comma, and you're done:
try:
    with open('a', 'w') as a, open('b', 'w') as b:
        do_something()
except IOError as e:
    print 'Operation failed: %s' % e.strerror
For opening many files at once or for long file paths, it may be useful to break things up over multiple lines. From the Python Style Guide as suggested by @Sven Marnach in comments to another answer:
with open('/path/to/InFile.ext', 'r') as file_1, \
        open('/path/to/OutFile.ext', 'w') as file_2:
    file_2.write(file_1.read())
Answer 3
Nested with statements will do the same job, and in my opinion, are more straightforward to deal with.
Let’s say you have inFile.txt, and want to write it into two outFile’s simultaneously.
with open("inFile.txt", 'r') as fr:
    with open("outFile1.txt", 'w') as fw1:
        with open("outFile2.txt", 'w') as fw2:
            for line in fr.readlines():
                fw1.writelines(line)
                fw2.writelines(line)
EDIT:
I don’t understand the reason for the downvote. I tested my code before publishing my answer, and it works as desired: it writes to both outFiles, just as the question asks. No duplicate writing or failing to write. So I am really curious to know why my answer is considered to be wrong, suboptimal or anything like that.
Since Python 3.3, you can use the class ExitStack from the contextlib module to safely open an arbitrary number of files.
It can manage a dynamic number of context-aware objects, which means that it will prove especially useful if you don’t know how many files you are going to handle.
In fact, the canonical use-case that is mentioned in the documentation is managing a dynamic number of files.
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    # All opened files will automatically be closed at the end of
    # the with statement, even if attempts to open files later
    # in the list raise an exception
If you are interested in the details, here is a generic example in order to explain how ExitStack operates:
from contextlib import ExitStack

class X:
    num = 1

    def __init__(self):
        self.num = X.num
        X.num += 1

    def __repr__(self):
        cls = type(self)
        return '{cls.__name__}{self.num}'.format(cls=cls, self=self)

    def __enter__(self):
        print('enter {!r}'.format(self))
        return self.num

    def __exit__(self, exc_type, exc_value, traceback):
        print('exit {!r}'.format(self))
        return True

xs = [X() for _ in range(3)]

with ExitStack() as stack:
    print(len(stack._exit_callbacks))  # number of callbacks called on exit
    nums = [stack.enter_context(x) for x in xs]
    print(len(stack._exit_callbacks))
print(len(stack._exit_callbacks))
print(nums)
Output:
0
enter X1
enter X2
enter X3
3
exit X3
exit X2
exit X1
0
[1, 2, 3]
With Python 2.6 it will not work; we have to use nested with statements to open multiple files:
with open('a', 'w') as a:
    with open('b', 'w') as b:
Answer 6
Late answer (8 yrs), but for someone looking to join multiple files into one, the following function may be of help:
def multi_open(_list):
    out = ""
    for x in _list:
        try:
            with open(x) as f:
                out += f.read()
        except:
            pass  # print(f"Cannot open file {x}")
    return out

fl = ["C:/bdlog.txt", "C:/Jts/tws.vmoptions", "C:/not.exist"]
print(multi_open(fl))
2018-10-23 19:18:11.361 PROFILE [Stop Drivers] [1ms]
2018-10-23 19:18:11.361 PROFILE [Parental uninit] [0ms]
...
# This file contains VM parameters for Trader Workstation.
# Each parameter should be defined in a separate line and the
...
What is the best way to open a file as read/write if it exists, or if it does not, then create it and open it as read/write? From what I read, file = open('myfile.dat', 'rw') should do this, right?
It is not working for me (Python 2.6.2) and I’m wondering if it is a version problem, or not supposed to work like that or what.
The bottom line is, I just need a solution for the problem. I am curious about the other stuff, but all I need is a nice way to do the opening part.
The enclosing directory was writeable by user and group, not other (I’m on a Linux system… so permissions 775 in other words), and the exact error was:
The advantage of the following approach is that the file is properly closed at the block’s end, even if an exception is raised on the way. It’s equivalent to try-finally, but much shorter.
with open("file.dat", "a+") as f:
    f.write(...)
    ...
a+ Opens a file for both appending and reading. The file pointer is
at the end of the file if the file exists. The file opens in the
append mode. If the file does not exist, it creates a new file for
reading and writing. –Python file modes
f.seek(pos [, (0|1|2)])
pos .. position of the r/w pointer
[] .. optionally
() .. one of ->
0 .. absolute position
1 .. relative position to current
2 .. relative position from end
Only “rwab+” characters are allowed; there must be exactly one of “rwa” – see Stack Overflow question Python file modes detail.
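Combining a+ with seek(), a small sketch (file.dat is an illustrative name): the file is created if missing, the write lands at the end, and seeking to absolute position 0 lets you read the whole file back:

```python
with open('file.dat', 'a+') as f:
    f.write('appended line\n')  # in 'a+' the r/w pointer starts at the end
    f.seek(0)                   # 0 .. absolute position: jump back to the start
    data = f.read()
print(data)
```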
import os
writepath = 'some/path/to/file.txt'
mode = 'a' if os.path.exists(writepath) else 'w'
with open(writepath, mode) as f:
    f.write('Hello, world!\n')
Since Python 3.4 you should use pathlib to “touch” files.
It is a much more elegant solution than the proposed ones in this thread.
from pathlib import Path
filename = Path('myfile.txt')
filename.touch(exist_ok=True) # will create file, if it exists will do nothing
file = open(filename)
Same thing with directories:
filename.mkdir(parents=True, exist_ok=True)
Answer 6
My answer:
file_path = 'myfile.dat'
try:
    fp = open(file_path)
except IOError:
    # If not exists, create the file
    fp = open(file_path, 'w+')
Answer 7
'''
w write mode
r read mode
a append mode
w+ create file if it doesn't exist and open it in write mode
r+ open for reading and writing. Does not create file.
a+ create file if it doesn't exist and open it in append mode
'''
example:
file_name = 'my_file.txt'
f = open(file_name, 'w+') # open file in write mode
f.write('python rules')
f.close()
I hope this helps. [FYI: I am using Python version 3.6.2]
Answer 8
open('myfile.dat', 'a') works for me, just fine.
In py3k, your code will raise a ValueError:
>>> open('myfile.dat', 'rw')
Traceback (most recent call last):
  File "<pyshell#34>", line 1, in <module>
    open('myfile.dat', 'rw')
ValueError: must have exactly one of read/write/append mode
In Python 2.6 it raises an IOError.
Answer 9
Use:
import os

f_loc = r"C:\Users\Russell\Desktop\myfile.dat"

# Create the file if it does not exist
if not os.path.exists(f_loc):
    open(f_loc, 'w').close()

# Open the file for appending and reading
with open(f_loc, 'a+') as f:
    # Do stuff
Note: Files have to be closed after you open them, and the with context manager is a nice way of letting Python take care of this for you.
Use w+ to write to the file, truncating it if it exists; r+ to read and write without creating the file (open fails if it doesn’t exist); or a+ to append, creating a new file if needed.
If you want to open it to read and write, I’m assuming you don’t want to truncate it as you open it and you want to be able to read the file right after opening it. So this is the solution I’m using:
Then create a variable named save_file and set it to the file you want to make, HTML or txt; in this case a txt file:
save_file = "history.txt"
Then define a function that will use the os.path.isfile method to check whether the file exists, and if not,
will create the file:
def check_into():
    if os.path.isfile(save_file):
        print("history file exists..... \nusing for writing....")
    else:
        print("history file does not exist..... \ncreating it..... ")
        file = open(save_file, 'w')
        time.sleep(2)
        print('file created')
        file.close()
and at last call the function
check_into()
Answer 16
import os, platform

os.chdir('c:\\Users\\MS\\Desktop')
try:
    file = open("Learn Python.txt", "a")
    print('this file exists')
except:
    print('this file does not exist')
file.write('\n' 'Hello Ashok')
fhead = open('Learn Python.txt')
for line in fhead:
    words = line.split()
    print(words)
This is an example of printing a string to a text file:
def my_func():
    """
    this function returns some value
    :return:
    """
    return 25.256


def write_file(data):
    """
    this function writes data to file
    :param data:
    :return:
    """
    file_name = r'D:\log.txt'
    with open(file_name, 'w') as x_file:
        x_file.write('{} TotalAmount'.format(data))


def run():
    data = my_func()
    write_file(data)

run()
The unlink method is used to remove a file or a symbolic link.
If missing_ok is false (the default), FileNotFoundError is raised if the path does not exist.
If missing_ok is true, FileNotFoundError exceptions will be ignored (same behavior as the POSIX rm -f command).
Changed in version 3.8: The missing_ok parameter was added.
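A minimal sketch of both behaviors (Python 3.8+ for missing_ok; the file name is illustrative):

```python
from pathlib import Path

p = Path('maybe_missing.txt')
p.unlink(missing_ok=True)  # no FileNotFoundError even if the file is absent

p.touch()
p.unlink()                 # plain unlink: here the file must exist
```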
Best practice
First, check whether the file or folder exists, and only then delete it. This can be achieved in two ways:
a. os.path.isfile("/path/to/file")
b. Use exception handling.
EXAMPLE for os.path.isfile
#!/usr/bin/python
import os

myfile = "/tmp/foo.txt"

## If file exists, delete it ##
if os.path.isfile(myfile):
    os.remove(myfile)
else:
    ## Show an error ##
    print("Error: %s file not found" % myfile)
Exception Handling
#!/usr/bin/python
import os

## Get input ##
myfile = raw_input("Enter file name to delete: ")

## Try to delete the file ##
try:
    os.remove(myfile)
except OSError as e:
    ## if failed, report it back to the user ##
    print("Error: %s - %s." % (e.filename, e.strerror))
RESPECTIVE OUTPUT
Enter file name to delete : demo.txt
Error: demo.txt - No such file or directory.
Enter file name to delete : rrr.txt
Error: rrr.txt - Operation not permitted.
Enter file name to delete : foo.txt
Python syntax to delete a folder
shutil.rmtree()
Example for shutil.rmtree()
#!/usr/bin/python
import os
import sys
import shutil

# Get directory name
mydir = raw_input("Enter directory name: ")

## Try to remove tree; if failed show an error using try...except on screen
try:
    shutil.rmtree(mydir)
except OSError as e:
    print("Error: %s - %s." % (e.filename, e.strerror))
Here is a robust function that uses both os.remove and shutil.rmtree:
def remove(path):
    """ param <path> could either be relative or absolute. """
    if os.path.isfile(path) or os.path.islink(path):
        os.remove(path)  # remove the file
    elif os.path.isdir(path):
        shutil.rmtree(path)  # remove dir and all it contains
    else:
        raise ValueError("file {} is not a file or dir.".format(path))
Note that you can also use relative paths with Path objects, and you can check your current working directory with Path.cwd.
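For instance (a sketch; 'subdir' is an arbitrary name):

```python
from pathlib import Path

print(Path.cwd())                  # current working directory

rel = Path('subdir') / 'file.txt'  # a relative Path built with '/'
print(rel.resolve())               # absolute version, anchored at the cwd
```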
For removing individual files and directories in Python 2, see the section so labeled below.
To remove a directory with contents, use shutil.rmtree, and note that this is available in Python 2 and 3:
from shutil import rmtree
rmtree(dir_path)
Demonstration
New in Python 3.4 is the Path object.
Let’s use one to create a directory and file to demonstrate usage. Note that we use the / to join the parts of the path; this works around issues between operating systems and issues from using backslashes on Windows (where you’d need to either double up your backslashes like \\ or use raw strings, like r"foo\bar"):
from pathlib import Path
# .home() is new in 3.5, otherwise use os.path.expanduser('~')
directory_path = Path.home() / 'directory'
directory_path.mkdir()
file_path = directory_path / 'file'
file_path.touch()
from os import rmdir
from os.path import expanduser, join

rmdir(join(expanduser('~'), 'directory'))
Note that there is also an os.removedirs – it only removes empty directories recursively, but it may suit your use-case.
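A quick sketch of os.removedirs on a small tree of empty directories ('a/b/c' is an arbitrary path):

```python
import os

os.makedirs('a/b/c')    # build a small tree of empty directories
os.removedirs('a/b/c')  # removes 'c', then 'b', then 'a' as each becomes empty
```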
Answer 6
import os

folder = '/Path/to/yourDir/'
fileList = os.listdir(folder)

for f in fileList:
    filePath = folder + '/' + f
    if os.path.isfile(filePath):
        os.remove(filePath)
    elif os.path.isdir(filePath):
        newFileList = os.listdir(filePath)
        for f1 in newFileList:
            insideFilePath = filePath + '/' + f1
            if os.path.isfile(insideFilePath):
                os.remove(insideFilePath)
Answer 7
shutil.rmtree is an asynchronous function, so if you want to check whether it has finished, you can use a while ... loop:
import os
import shutil

shutil.rmtree(path)
while os.path.exists(path):
    pass
print('done')
Both functions are semantically the same. This function removes (deletes) the file path. If path is not a file but a directory, an exception is raised.
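A quick illustration (tmp_demo.txt is a throwaway name):

```python
import os

# os.remove and os.unlink do exactly the same thing: delete a file (not a directory).
open('tmp_demo.txt', 'w').close()
os.unlink('tmp_demo.txt')  # identical in effect to os.remove('tmp_demo.txt')
```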
import os
import glob

files = glob.glob(os.path.join('path/to/folder/*'))
files = glob.glob(os.path.join('path/to/folder/*.csv'))  # It will give all csv files in folder
for file in files:
    os.remove(file)
To remove all folders in a directory
from shutil import rmtree
import os

# os.path.join()  # current working directory.
for dirct in os.listdir(os.path.join('path/to/folder')):
    rmtree(os.path.join('path/to/folder', dirct))
You should use the print() function which is available since Python 2.6+
from __future__ import print_function # Only needed for Python 2
print("hi there", file=f)
For Python 3 you don’t need the import, since the print() function is the default.
The alternative would be to use:
f = open('myfile', 'w')
f.write('hi there\n') # python will convert \n to os.linesep
f.close() # you can omit in most cases as the destructor will call it
On output, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '', no translation takes place. If newline is any of the other legal values, any '\n' characters written are translated to the given string.
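For instance, opening with newline='' passes '\n' through verbatim (a sketch; this is also what the csv module requires):

```python
# With newline='' the '\n' characters are written through as-is,
# even on Windows where the default would translate them to '\r\n'.
with open('raw_newlines.txt', 'w', newline='') as f:
    f.write('one\ntwo\n')

with open('raw_newlines.txt', 'rb') as f:
    raw = f.read()
print(raw)
```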
It is good practice to use the ‘with’ keyword when dealing with file
objects. This has the advantage that the file is properly closed after
its suite finishes, even if an exception is raised on the way. It is
also much shorter than writing equivalent try-finally blocks.
Answer 3
Regarding os.linesep:
Here is an exact unedited Python 2.7.1 interpreter session on Windows:
Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.linesep
'\r\n'
>>> f = open('myfile','w')
>>> f.write('hi there\n')
>>> f.write('hi there' + os.linesep) # same result as previous line ?????????
>>> f.close()
>>> open('myfile', 'rb').read()
'hi there\r\nhi there\r\r\n'
>>>
On Windows:
As expected, os.linesep does NOT produce the same outcome as '\n'. There is no way that it could produce the same outcome. 'hi there' + os.linesep is equivalent to 'hi there\r\n', which is NOT equivalent to 'hi there\n'.
It’s this simple: use \n which will be translated automatically to os.linesep. And it’s been that simple ever since the first port of Python to Windows.
There is no point in using os.linesep on non-Windows systems, and it produces wrong results on Windows.
DO NOT USE os.linesep!
Answer 4
I don't think there is a "right" way.
I would use:
with open('myfile', 'a') as f:
    f.write('hi there\n')
If you are writing a lot of data and speed is a concern you should probably go with f.write(...). I did a quick speed comparison and it was considerably faster than print(..., file=f) when performing a large number of writes.
import time

start = time.time()
with open("test.txt", 'w') as f:
    for i in range(10000000):
        pass  # uncomment one of the lines below to time it
        # print('This is a speed test', file=f)
        # f.write('This is a speed test\n')
end = time.time()
print(end - start)
On average write finished in 2.45s on my machine, whereas print took about 4 times as long (9.76s). That being said, in most real-world scenarios this will not be an issue.
If you choose to go with print(..., file=f) you will probably find that you’ll want to suppress the newline from time to time, or replace it with something else. This can be done by setting the optional end parameter, e.g.;
with open("test", 'w') as f:
    print('Foo1,', file=f, end='')
    print('Foo2,', file=f, end='')
    print('Foo3', file=f)
Whichever way you choose I’d suggest using with since it makes the code much easier to read.
Update: This difference in performance is explained by the fact that write is highly buffered and returns before any writes to disk actually take place (see this answer), whereas print (probably) uses line buffering. A simple test for this would be to check performance for long writes as well, where the disadvantages (in terms of speed) for line buffering would be less pronounced.
start = time.time()
long_line = 'This is a speed test' * 100
with open("test.txt", 'w') as f:
    for i in range(1000000):
        pass  # uncomment one of the lines below to time it
        # print(long_line, file=f)
        # f.write(long_line + '\n')
end = time.time()
print(end - start, "s")
The performance difference now becomes much less pronounced, with an average time of 2.20s for write and 3.10s for print. If you need to concatenate a bunch of strings to get this loooong line performance will suffer, so use-cases where print would be more efficient are a bit rare.
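A small sketch of explicit flushing (assuming the default buffering of a text-mode file; the file name is illustrative):

```python
with open('buffered.txt', 'w') as f:
    f.write('data\n')  # may only land in the internal buffer for now
    f.flush()          # explicitly push the buffer out to the OS
# leaving the with block closes the file, which flushes anyway
```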
When you say “line”, it means a sequence of characters terminated by a '\n' character; every line ends at some point, so we should account for the '\n' at the end of each line. Here is a solution:
with open('YOURFILE.txt', 'a') as the_file:
    the_file.write("Hello")
In append mode, after each write the cursor moves to a new line; if you want to use w mode, you should add '\n' characters at the end of the write() function:
for root, dirs, files in os.walk(directory):
    for file in files:
        if file.endswith('.txt'):
            print file
Answer 3
Something like this will work:
>>> import os
>>> path = '/usr/share/cups/charmaps'
>>> text_files = [f for f in os.listdir(path) if f.endswith('.txt')]
>>> text_files
['euc-cn.txt', 'euc-jp.txt', 'euc-kr.txt', 'euc-tw.txt', ... 'windows-950.txt']
for txt_file in pathlib.Path('your_directory').glob('*.txt'):
    # do something with "txt_file"
If you want it recursive you can use .glob('**/*.txt')
The pathlib module was included in the standard library in Python 3.4. But you can install back-ports of that module even on older Python versions (i.e. using conda or pip): pathlib and pathlib2.
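Both spellings side by side (a sketch; 'your_directory' is a placeholder):

```python
import pathlib

base = pathlib.Path('your_directory')  # placeholder directory name
base.mkdir(exist_ok=True)
(base / 'note.txt').touch()

found_glob = sorted(base.glob('**/*.txt'))  # recursive glob pattern
found_rglob = sorted(base.rglob('*.txt'))   # equivalent shorthand
```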
Answer 5
import os

path = 'mypath/path'
files = os.listdir(path)
files_txt = [i for i in files if i.endswith('.txt')]
import os

for root, dirs, files in os.walk(dir):
    for f in files:
        if os.path.splitext(f)[1] == '.txt':
            fullpath = os.path.join(root, f)
            print(fullpath)
Or with generators:
import os

fileiter = (os.path.join(root, f)
            for root, _, files in os.walk(dir)
            for f in files)
txtfileiter = (f for f in fileiter if os.path.splitext(f)[1] == '.txt')
for txt in txtfileiter:
    print(txt)
from path import path

p = path('/path/to/the/directory')
for f in p.files(pattern='*.txt'):
    print f
Answer 9
Python 3.5+
Fast method using os.scandir in a recursive function. Searches for all files with a specified extension in folder and sub-folders.
import os
def findFilesInFolder(path, pathList, extension, subFolders = True):
""" Recursive function to find all files of an extension type in a folder (and optionally in all subfolders too)
path: Base directory to find files
pathList: A list that stores all paths
extension: File extension to find
subFolders: Bool. If True, find files in all subfolders under path. If False, only searches files in the specified folder
"""
try: # Trapping a OSError: File permissions problem I believe
for entry in os.scandir(path):
if entry.is_file() and entry.path.endswith(extension):
pathList.append(entry.path)
elif entry.is_dir() and subFolders: # if its a directory, then repeat process as a nested function
pathList = findFilesInFolder(entry.path, pathList, extension, subFolders)
except OSError:
print('Cannot access ' + path +'. Probably a permissions error')
return pathList
dir_name = r'J:\myDirectory'
extension = ".txt"
pathList = []
pathList = findFilesInFolder(dir_name, pathList, extension, True)
Update April 2019
If you are searching over directories which contain tens of thousands of files, appending to a list becomes inefficient. Yielding the results is a better solution. I have also included a function to convert the output to a Pandas DataFrame.
import os
import re
import pandas as pd
import numpy as np
def findFilesInFolderYield(path, extension, containsTxt='', subFolders = True, excludeText = ''):
""" Recursive function to find all files of an extension type in a folder (and optionally in all subfolders too)
path: Base directory to find files
extension: File extension to find. e.g. 'txt'. Regular expression. Or 'ls\d' to match ls1, ls2, ls3 etc
containsTxt: List of Strings, only finds file if it contains this text. Ignore if '' (or blank)
subFolders: Bool. If True, find files in all subfolders under path. If False, only searches files in the specified folder
excludeText: Text string. Ignore if ''. Will exclude if text string is in path.
"""
if type(containsTxt) == str: # if a string and not in a list
containsTxt = [containsTxt]
    myregexobj = re.compile(r'\.' + extension + '$') # Makes sure the file extension is at the end and is preceded by a .
try: # Trapping a OSError or FileNotFoundError: File permissions problem I believe
for entry in os.scandir(path):
if entry.is_file() and myregexobj.search(entry.path): #
bools = [True for txt in containsTxt if txt in entry.path and (excludeText == '' or excludeText not in entry.path)]
if len(bools)== len(containsTxt):
yield entry.stat().st_size, entry.stat().st_atime_ns, entry.stat().st_mtime_ns, entry.stat().st_ctime_ns, entry.path
elif entry.is_dir() and subFolders: # if its a directory, then repeat process as a nested function
yield from findFilesInFolderYield(entry.path, extension, containsTxt, subFolders)
except OSError as ose:
print('Cannot access ' + path +'. Probably a permissions error ', ose)
except FileNotFoundError as fnf:
print(path +' not found ', fnf)
def findFilesInFolderYieldandGetDf(path, extension, containsTxt, subFolders = True, excludeText = ''):
""" Converts returned data from findFilesInFolderYield and creates and Pandas Dataframe.
Recursive function to find all files of an extension type in a folder (and optionally in all subfolders too)
path: Base directory to find files
extension: File extension to find. e.g. 'txt'. Regular expression. Or 'ls\d' to match ls1, ls2, ls3 etc
containsTxt: List of Strings, only finds file if it contains this text. Ignore if '' (or blank)
subFolders: Bool. If True, find files in all subfolders under path. If False, only searches files in the specified folder
excludeText: Text string. Ignore if ''. Will exclude if text string is in path.
"""
    fileSizes, accessTimes, modificationTimes, creationTimes, paths = zip(*findFilesInFolderYield(path, extension, containsTxt, subFolders))
df = pd.DataFrame({
'FLS_File_Size':fileSizes,
'FLS_File_Access_Date':accessTimes,
'FLS_File_Modification_Date':np.array(modificationTimes).astype('timedelta64[ns]'),
'FLS_File_Creation_Date':creationTimes,
'FLS_File_PathName':paths,
})
df['FLS_File_Modification_Date'] = pd.to_datetime(df['FLS_File_Modification_Date'],infer_datetime_format=True)
df['FLS_File_Creation_Date'] = pd.to_datetime(df['FLS_File_Creation_Date'],infer_datetime_format=True)
df['FLS_File_Access_Date'] = pd.to_datetime(df['FLS_File_Access_Date'],infer_datetime_format=True)
return df
ext = 'txt' # regular expression
containsTxt=[]
path = r'C:\myFolder'
df = findFilesInFolderYieldandGetDf(path, ext, containsTxt, subFolders = True)
Answer 10
Python has all the tools to do this:
import os
the_dir = 'the_dir_that_want_to_search_in'
all_txt_files = list(filter(lambda x: x.endswith('.txt'), os.listdir(the_dir)))
To get all ‘.txt’ file names inside ‘dataPath’ folder as a list in a Pythonic way:
from os import listdir
from os.path import isfile, join
path = "/dataPath/"
onlyTxtFiles = [f for f in listdir(path) if isfile(join(path, f)) and f.endswith(".txt")]
print(onlyTxtFiles)
Answer 12
Try this; it will find all your files recursively:
import glob, os
os.chdir("H:\\wallpaper")# use whatever directory you want
#double\\ no single \
for file in glob.glob("**/*.txt", recursive = True):
print(file)
Answer 13
import os
import sys

if len(sys.argv) < 3:
    print('no params')
    sys.exit(1)

dir = sys.argv[1]
mask = sys.argv[2]
files = os.listdir(dir)
res = filter(lambda x: x.endswith(mask), files)
print(list(res))
I did a test (Python 3.6.4, W7x64) to see which solution is the fastest for one folder, no subdirectories, to get a list of complete file paths for files with a specific extension.
To make it short: for this task os.listdir() is the fastest, and is 1.7x as fast as the next best, os.walk() (with a break!), 2.7x as fast as pathlib, 3.2x as fast as os.scandir(), and 3.3x as fast as glob.
Please keep in mind that those results will change when you need recursive results. If you copy/paste one of the methods below, please add a .lower(); otherwise .EXT would not be found when searching for .ext.
import os
import pathlib
import timeit
import glob
def a():
path = pathlib.Path().cwd()
list_sqlite_files = [str(f) for f in path.glob("*.sqlite")]
def b():
path = os.getcwd()
list_sqlite_files = [f.path for f in os.scandir(path) if os.path.splitext(f)[1] == ".sqlite"]
def c():
path = os.getcwd()
list_sqlite_files = [os.path.join(path, f) for f in os.listdir(path) if f.endswith(".sqlite")]
def d():
path = os.getcwd()
os.chdir(path)
list_sqlite_files = [os.path.join(path, f) for f in glob.glob("*.sqlite")]
def e():
path = os.getcwd()
list_sqlite_files = [os.path.join(path, f) for f in glob.glob1(str(path), "*.sqlite")]
def f():
path = os.getcwd()
list_sqlite_files = []
for root, dirs, files in os.walk(path):
for file in files:
if file.endswith(".sqlite"):
list_sqlite_files.append( os.path.join(root, file) )
break
print(timeit.timeit(a, number=1000))
print(timeit.timeit(b, number=1000))
print(timeit.timeit(c, number=1000))
print(timeit.timeit(d, number=1000))
print(timeit.timeit(e, number=1000))
print(timeit.timeit(f, number=1000))
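Tying back to the .lower() note above, a minimal sketch of case-insensitive extension matching (the name list is made up):

```python
names = ['report.TXT', 'notes.txt', 'image.png']

# Normalize with .lower() so '.TXT' is found when searching for '.txt'
txt_files = [n for n in names if n.lower().endswith('.txt')]
print(txt_files)  # ['report.TXT', 'notes.txt']
```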
import os
fnames = ([file for root, dirs, files in os.walk(dir)
for file in files
if file.endswith('.txt') #or file.endswith('.png') or file.endswith('.pdf')
])
for fname in fnames: print(fname)
import glob
import os

types = ('*.jpg', '*.png')
images_list = []
for files in types:
    images_list.extend(glob.glob(os.path.join(path, files)))
Answer 20
A functional solution with sub-directories:
from fnmatch import filter
from functools import partial
from itertools import chain
from os import path, walk
print(*chain(*(map(partial(path.join, root), filter(filenames, "*.txt")) for root, _, filenames in walk("mydir"))))
Answer 21
In case the folder contains a lot of files or memory is a constraint, consider using generators:
import os

def yield_files_with_extensions(folder_path, file_extension):
for _, _, files in os.walk(folder_path):
for file in files:
if file.endswith(file_extension):
yield file
Option A: Iterate
for f in yield_files_with_extensions('.', '.txt'):
print(f)
Option B: Get all
files = [f for f in yield_files_with_extensions('.', '.txt')]
Answer 22
A copy-pastable solution, similar to ghostdog's:
def get_all_filepaths(root_path, ext):
"""
Search all files which have a given extension within root_path.
This ignores the case of the extension and searches subdirectories, too.
Parameters
----------
root_path : str
ext : str
Returns
-------
list of str
Examples
--------
>>> get_all_filepaths('/run', '.lock')
['/run/unattended-upgrades.lock',
'/run/mlocate.daily.lock',
'/run/xtables.lock',
'/run/mysqld/mysqld.sock.lock',
'/run/postgresql/.s.PGSQL.5432.lock',
'/run/network/.ifstate.lock',
'/run/lock/asound.state.lock']
"""
import os
all_files = []
for root, dirs, files in os.walk(root_path):
for filename in files:
if filename.lower().endswith(ext):
all_files.append(os.path.join(root, filename))
return all_files
Use the Python os module to find files with a specific extension.
A simple example is here:
import os
# This is the path where you want to search
path = r'd:'
# this is extension you want to detect
extension = '.txt' # this can be : .jpg .png .xls .log .....
for root, dirs_list, files_list in os.walk(path):
for file_name in files_list:
if os.path.splitext(file_name)[-1] == extension:
file_name_path = os.path.join(root, file_name)
            print(file_name)
            print(file_name_path) # This is the full path of the filtered file
Answer 24
Many users have replied with os.walk answers, which include not just all files but also all directories and subdirectories and their files.
import os
def files_in_dir(path, extension=''):
"""
Generator: yields all of the files in <path> ending with
<extension>
\param path Absolute or relative path to inspect,
\param extension [optional] Only yield files matching this,
\yield [filenames]
"""
for _, dirs, files in os.walk(path):
dirs[:] = [] # do not recurse directories.
yield from [f for f in files if f.endswith(extension)]
# Example: print all the .py files in './python'
for filename in files_in_dir('./python', '.py'):
print("-", filename)
Or for a one off where you don’t need a generator:
path, ext = "./python", ".py"
for _, _, dirfiles in os.walk(path):
matches = (f for f in dirfiles if f.endswith(ext))
break
for filename in matches:
print("-", filename)
If you are going to use matches for something else, you may want to make it a list rather than a generator expression:
matches = [f for f in dirfiles if f.endswith(ext)]
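The distinction matters because a generator expression can be consumed only once, while a list can be iterated repeatedly; a small illustration with a made-up file list:

```python
dirfiles = ['a.py', 'b.txt', 'c.py']
ext = '.py'

# Generator expression: lazy, single use
matches = (f for f in dirfiles if f.endswith(ext))
print(list(matches))  # ['a.py', 'c.py']
print(list(matches))  # [] -- already exhausted

# List comprehension: built eagerly, reusable
matches = [f for f in dirfiles if f.endswith(ext)]
print(matches)  # ['a.py', 'c.py']
print(matches)  # ['a.py', 'c.py']
```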
Answer 25
A simple method using a for loop:
import os

ext = ["t", "x", "t"] # the last three characters of the extension, here '.txt'
p = os.listdir('E:') # path
for n in range(len(p)):
    name = p[n]
    myfile = [name[-3], name[-2], name[-1]]
    if myfile == ext:
        print(name)
    else:
        print("nops")