Whenever you want to quickly bombard a URL with some concurrent traffic, you can use this:
```python
import random
import time
import requests
import concurrent.futures


def _get_size(url):
    # Stagger the requests by up to 100ms so they don't all land
    # at exactly the same instant.
    sleep = random.random() / 10
    # print("sleep", sleep)
    time.sleep(sleep)
    r = requests.get(url)
    # print(r.status_code)
    assert len(r.text)
    return len(r.text)


def run(url, times=10):
    sizes = []
    futures = []
    with concurrent.futures.ThreadPoolExecutor() as executor:
        # Kick off all the requests...
        for _ in range(times):
            futures.append(executor.submit(_get_size, url))
        # ...then collect each response size as it completes.
        for future in concurrent.futures.as_completed(futures):
            sizes.append(future.result())
    return sizes


if __name__ == "__main__":
    import sys

    print(run(sys.argv[1]))
```
It's really basic but it works wonderfully. It submits 10 requests to a thread pool so they all hit the same URL at almost the same time.
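Assuming you save it as `bombard.py` (the filename is my invention, as is the example URL), you can run it directly or import `run` from another script:

```python
# From the shell:
#   python bombard.py http://localhost:8000/some/page
#
# Or import it and crank up the request count:
from bombard import run

sizes = run("http://localhost:8000/some/page", times=50)
print(sizes)  # one response size per request, e.g. [5231, 5231, ...]
```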
I've been using this to stress test a local Django server while testing some atomic writes to the file system.
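For context, the standard trick for atomic file-system writes is to write to a temporary file and then rename it into place. Here's a minimal sketch of that pattern (the `atomic_write` helper is illustrative, not the actual code from my Django server):

```python
import os
import tempfile


def atomic_write(path, data):
    """Illustrative helper: write `data` to `path` atomically."""
    # Create the temp file in the same directory so the final
    # os.replace() stays on one file system (renames across
    # file systems are not atomic).
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        # os.replace() is atomic on POSIX, so concurrent readers see
        # either the old file or the new one, never a partial write.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```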