import requests
from time import time

url_list = [
    "https://via.placeholder.com/400",
    "https://via.placeholder.com/410",
    "https://via.placeholder.com/420",
    "https://via.placeholder.com/430",
    "https://via.placeholder.com/440",
    "https://via.placeholder.com/450",
    "https://via.placeholder.com/460",
    "https://via.placeholder.com/470",
    "https://via.placeholder.com/480",
    "https://via.placeholder.com/490",
    "https://via.placeholder.com/500",
    "https://via.placeholder.com/510",
    "https://via.placeholder.com/520",
    "https://via.placeholder.com/530",
]

def download_file(url):
    html = requests.get(url, stream=True)
    return html.status_code

start = time()

for url in url_list:
    print(download_file(url))

print(f'Time taken: {time() - start}')
Output:
<--truncated--> Time taken: 4.128157138824463
This is a sensible example: the code opens each URL, waits for it to load, prints its status code, and only then moves on to the next URL. Code like this is a perfect candidate for multithreading.
Modern systems can run a large number of threads, which means you can work on several tasks at once with very little overhead. Why don't we take advantage of that to make the code above get through these URLs faster?
We will use ThreadPoolExecutor from the concurrent.futures library. It is remarkably easy to use. Let me show you some code and then explain how it works.
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import time

url_list = [
    "https://via.placeholder.com/400",
    "https://via.placeholder.com/410",
    "https://via.placeholder.com/420",
    "https://via.placeholder.com/430",
    "https://via.placeholder.com/440",
    "https://via.placeholder.com/450",
    "https://via.placeholder.com/460",
    "https://via.placeholder.com/470",
    "https://via.placeholder.com/480",
    "https://via.placeholder.com/490",
    "https://via.placeholder.com/500",
    "https://via.placeholder.com/510",
    "https://via.placeholder.com/520",
    "https://via.placeholder.com/530",
]

def download_file(url):
    html = requests.get(url, stream=True)
    return html.status_code

start = time()

processes = []
with ThreadPoolExecutor(max_workers=10) as executor:
    for url in url_list:
        processes.append(executor.submit(download_file, url))

for task in as_completed(processes):
    print(task.result())

print(f'Time taken: {time() - start}')
Output:
<--truncated--> Time taken: 0.4583399295806885
Our code sped up by nearly 9x, and we didn't even do anything particularly involved. With more URLs, the performance benefit would be even greater.
So what is happening? When we call executor.submit, we add a new task to the thread pool and store the returned task in the processes list. Later we iterate over those tasks and print out their results.
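As an aside, if you only care about the results and not the individual task objects, executor.map is a more compact way to express the same work. Here is a minimal sketch, assuming the download_file function and url_list from the snippets above are already defined:

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=10) as executor:
    # map() returns results in the same order as url_list,
    # whereas submit() + as_completed() yields them as they finish.
    for status_code in executor.map(download_file, url_list):
        print(status_code)

The trade-off is ordering: map preserves the input order, while as_completed lets you handle whichever request finishes first.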
The as_completed method yields tasks from the processes list as soon as they complete. A task can reach the completed state for two reasons: it finished executing or it was cancelled. We can also pass a timeout parameter to as_completed; if a task takes longer than that period, as_completed raises a TimeoutError.
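To make the timeout behaviour concrete, here is a minimal sketch, again assuming download_file and url_list from above, with an arbitrary 2-second budget for the whole batch:

from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError

with ThreadPoolExecutor(max_workers=10) as executor:
    tasks = [executor.submit(download_file, url) for url in url_list]
    try:
        # The iterator raises TimeoutError if the next result is not
        # available within the remaining time budget.
        for task in as_completed(tasks, timeout=2):
            print(task.result())
    except TimeoutError:
        print('Some downloads did not finish within 2 seconds')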
You should explore multithreading a bit more. For trivial projects it is the quickest way to speed up your code. If you want to dig deeper, read the official documentation at https://docs.python.org/3/library/concurrent.futures.html; it is very helpful.