Saving a Spark RDD to a file with saveAsTextFile
Save: repartition(1) forces all data into a single partition, so the output directory contains just one part file. Note that despite the ".txt" in the name, saveAsTextFile creates a directory of part files, not a single text file.

    sc.parallelize(["one", "two", "two", "three", "three", "three"]) \
      .map(lambda x: (x, 1)) \
      .repartition(1) \
      .saveAsTextFile("feature/all.txt")

Load it back with textFile. Each record was written via str(), so the tuples come back as strings:

    a = sc.textFile("feature/all.txt")
    a.collect()
    # [u"('one',1)", u"('two',1)", u"('two',1)", u...
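Because saveAsTextFile writes each record with str(), reading the files back with textFile yields plain strings like "('one', 1)", not tuples. A minimal sketch of recovering the original pairs (using the standard library's ast.literal_eval; the saved_lines list below stands in for the output of a.collect() and is an assumption, not real job output):

```python
import ast

# Stand-in for the lines read back by sc.textFile(...).collect();
# saveAsTextFile wrote each (word, count) tuple via str().
saved_lines = ["('one', 1)", "('two', 1)", "('two', 1)", "('three', 1)"]

# ast.literal_eval safely parses Python literals, turning each
# string back into the original (word, count) tuple.
pairs = [ast.literal_eval(line) for line in saved_lines]
print(pairs)  # [('one', 1), ('two', 1), ('two', 1), ('three', 1)]
```

In a real job the same parsing can be done distributedly with a.map(ast.literal_eval) instead of collecting first.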