A common Python interview question: what is the limitation of the following code, and how should it be fixed?
def strTest(num):
    s = 'Hello'
    for i in range(num):
        s += 'x'
    return s
The code above reveals the problem: str is an immutable type in Python, so on every iteration Python creates a new str object to hold the enlarged string. The larger num is, the more str objects get created, the more memory is consumed, and the slower the code runs. To fix this, replace repeated string concatenation with a list plus join, as shown below:
def strTest2(num):
    s = 'Hello'
    l = list(s)
    for i in range(num):
        l.append('x')
    return ''.join(l)
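The immutability claim is easy to verify directly. The following minimal sketch (not part of the original article) keeps a second reference to the string, which forces += to produce a brand-new object:

s = 'Hello'
alias = s            # a second reference to the same str object
s += 'x'
print(s is alias)    # False: += rebound s to a new object
print(alias)         # 'Hello' -- the original string is untouched

Because a str can never be modified in place, each s += 'x' has to copy the whole accumulated string, which is where the cost comes from.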
Here is a comparison of the running speed of the two approaches:
>>> def strTest1(num):
...     s = 'Hello'
...     for i in range(num):
...         s += 'x'
...     return s
>>> def strTest2(num):
...     s = 'Hello'
...     l = list(s)
...     for i in range(num):
...         l.append('x')
...     return ''.join(l)
>>>
>>> from timeit import timeit
# Timing comparison at the 100,000 scale
>>> timeit("strTest1(100000)", setup="from __main__ import strTest1", number=1)
0.016680980406363233
>>> timeit("strTest2(100000)", setup="from __main__ import strTest2", number=1)
0.009688869110618725
# Timing comparison at the 1,000,000 scale
>>> timeit("strTest1(1000000)", setup="from __main__ import strTest1", number=1)
0.14558920607187195
>>> timeit("strTest2(1000000)", setup="from __main__ import strTest2", number=1)
0.1335057276853462
# Timing comparison at the 10,000,000 scale
>>> timeit("strTest1(10000000)", setup="from __main__ import strTest1", number=1)
5.9497953107860475
>>> timeit("strTest2(10000000)", setup="from __main__ import strTest2", number=1)
1.3268972136649921
# Timing comparison at the 20,000,000 scale
>>> timeit("strTest1(20000000)", setup="from __main__ import strTest1", number=1)
21.661270140499056
>>> timeit("strTest2(20000000)", setup="from __main__ import strTest2", number=1)
2.6981786518920217
# Timing comparison at the 30,000,000 scale
>>> timeit("strTest1(30000000)", setup="from __main__ import strTest1", number=1)
49.858089123966295
>>> timeit("strTest2(30000000)", setup="from __main__ import strTest2", number=1)
4.285787770209481
# Timing comparison at the 40,000,000 scale
>>> timeit("strTest1(40000000)", setup="from __main__ import strTest1", number=1)
86.67876273457563
>>> timeit("strTest2(40000000)", setup="from __main__ import strTest2", number=1)
5.328653452047092
# Timing comparison at the 50,000,000 scale
>>> timeit("strTest1(50000000)", setup="from __main__ import strTest1", number=1)
130.59138063819023
>>> timeit("strTest2(50000000)", setup="from __main__ import strTest2", number=1)
6.8375931077291625
# Timing comparison at the 60,000,000 scale
>>> timeit("strTest1(60000000)", setup="from __main__ import strTest1", number=1)
188.28227241975003
>>> timeit("strTest2(60000000)", setup="from __main__ import strTest2", number=1)
8.080144489401846
# Timing comparison at the 70,000,000 scale
>>> timeit("strTest1(70000000)", setup="from __main__ import strTest1", number=1)
256.54383904350277
>>> timeit("strTest2(70000000)", setup="from __main__ import strTest2", number=1)
9.387400816458012
# Timing comparison at the 80,000,000 scale
>>> timeit("strTest1(80000000)", setup="from __main__ import strTest1", number=1)
333.7185806572388
>>> timeit("strTest2(80000000)", setup="from __main__ import strTest2", number=1)
10.946627677462857
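For anyone who wants to rerun the comparison outside the interactive prompt, here is a minimal standalone sketch (not from the original article; absolute timings will vary with hardware and Python version):

from timeit import timeit

def strTest1(num):
    s = 'Hello'
    for i in range(num):
        s += 'x'
    return s

def strTest2(num):
    s = 'Hello'
    l = list(s)
    for i in range(num):
        l.append('x')
    return ''.join(l)

# number=1 runs each function once per input size, matching the session above.
for n in (100000, 1000000, 10000000):
    t1 = timeit(lambda: strTest1(n), number=1)
    t2 = timeit(lambda: strTest2(n), number=1)
    print(f"n={n}: += took {t1:.3f}s, join took {t2:.3f}s")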
The numbers above show that the two approaches differ little on small inputs, but the gap widens sharply as the input grows: in the worst case each += copies the entire accumulated string, so total work grows roughly quadratically with num, while appending to a list and joining once stays linear. Repeated string concatenation therefore becomes very slow once the data gets large.
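As a side note beyond the original article: in this particular example the appended piece is a constant 'x', so the loop can be skipped entirely with string repetition (strTest3 is a hypothetical name used here only for illustration):

def strTest3(num):
    # 'x' * num builds the whole suffix in a single allocation,
    # so there is no per-iteration copying at all.
    return 'Hello' + 'x' * num

assert strTest3(3) == 'Helloxxx'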
Summary
That is all for this article. I hope it offers some reference value for your study or work.
Original article: https://blog.csdn.net/Jerry_1126/article/details/86584936