I have a function I want to parallelize.

import multiprocessing as mp
import re
from pathos.multiprocessing import ProcessingPool as Pool

cores = mp.cpu_count()
# create the multiprocessing pool
pool = Pool(cores)

def clean_preprocess(text):
    """
    Given a string of text, the function:
    1. Removes all punctuation and numbers and converts the text to lower case
    2. Handles the negation words defined above.
    3. Tokenizes words that are of more than length 1
    """
    # n_pattern, n_dict and tok are assumed to be defined earlier in the script
    cores = mp.cpu_count()
    pool = Pool(cores)
    lower = re.sub(r'[^a-zA-Z\s\']', "", text).lower()
    lower_neg_handled = n_pattern.sub(lambda x: n_dict[x.group()], lower)
    letters_only = re.sub(r'[^a-zA-Z\s]', "", lower_neg_handled)
    words = [i for i in tok.tokenize(letters_only) if len(i) > 1]  # parallelize this?
    return ' '.join(words)

I have been reading the multiprocessing documentation, but I am still somewhat confused about how to parallelize my function properly. I would be grateful if someone could point me in the right direction for parallelizing a function like this one.
1 Answer

catspeake
In your function, you could decide to parallelize by splitting the text into sub-parts, applying the tokenization to the sub-parts, then joining the results.
Something along the lines of:
text0 = text[:len(text) // 2]
text1 = text[len(text) // 2:]
Then apply your processing to these two parts, using:
# here, I suppose that clean_preprocess is the sequential version,
# and we manage the pool outside of it
with Pool(2) as p:
    words0, words1 = p.map(clean_preprocess, [text0, text1])
# clean_preprocess returns a space-joined string, so re-insert a space between the parts
words = words0 + ' ' + words1
# or continue with words0 and words1 separately to save the cost of joining the strings
However, your function looks memory-bound, so it will not get a great speedup (these days, a factor of 2 is typically the most we can hope for on a standard computer). See for example How much does parallelization help the performance if the program is memory-bound? or What do the terms "CPU bound" and "I/O bound" mean?
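If you want to see where you stand, a minimal timing sketch could look like the following. It assumes clean_preprocess and its helpers (n_pattern, n_dict, tok) are defined as in the question, and reuses the answer's with Pool(...) pattern; measure_speedup is just an illustrative name.

import time
from pathos.multiprocessing import ProcessingPool as Pool

def measure_speedup(text):
    # sequential baseline
    t0 = time.perf_counter()
    clean_preprocess(text)
    seq = time.perf_counter() - t0

    # two-way parallel version, same splitting as above
    half = len(text) // 2
    t0 = time.perf_counter()
    with Pool(2) as p:
        p.map(clean_preprocess, [text[:half], text[half:]])
    par = time.perf_counter() - t0

    print("sequential: %.3fs  parallel: %.3fs  speedup: %.2fx" % (seq, par, seq / par))

On a memory-bound workload, expect the reported speedup to stay well below 2x.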
So you could try splitting the text into more than 2 parts, but it may not get any faster. You might even end up with disappointing performance, because splitting the text may cost more than processing it.
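If you want to experiment anyway, a minimal sketch of an n-way split (again assuming clean_preprocess from the question; parallel_preprocess is a made-up name) could be:

def parallel_preprocess(text, n=4):
    # cut the text into n roughly equal chunks of characters
    size = max(1, len(text) // n)
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    with Pool(len(chunks)) as p:
        parts = p.map(clean_preprocess, chunks)
    return ' '.join(parts)

Note that slicing at fixed character offsets can cut a word in two at each chunk boundary; splitting at the nearest whitespace instead avoids that, at the cost of a little more bookkeeping.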