Asked by: 小点点

Stemming in Python (with a pandas DataFrame)


I created a DataFrame containing sentences that need to be stemmed. I want to use the Snowball stemmer to improve the accuracy of my classification algorithm. How can I achieve this?

import pandas as pd
from nltk.stem.snowball import SnowballStemmer

# Use English stemmer.
stemmer = SnowballStemmer("english")

# Sentences to be stemmed.
data = ["programmers program with programming languages",
        "my code is working so there must be a bug in the interpreter"]
# Create the Pandas dataFrame.
df = pd.DataFrame(data, columns = ['unstemmed']) 

# Split the sentences to lists of words.
df['unstemmed'] = df['unstemmed'].str.split()

# Make sure we see the full column.
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in pandas >= 1.0.

# Print dataframe.
df 

+----+---------------------------------------------------------------+
|    | unstemmed                                                     |
|----+---------------------------------------------------------------|
|  0 | ['programmers', 'program', 'with', 'programming', 'languages']|
|  1 | ['my', 'code', 'is', 'working', 'so', 'there', 'must',        |  
|    |  'be', 'a', 'bug', 'in', 'the', 'interpreter']                |
+----+---------------------------------------------------------------+

1 Answer

Anonymous user

You have to apply the stemmer to every word and store the result in a new "stemmed" column.

df['stemmed'] = df['unstemmed'].apply(lambda x: [stemmer.stem(y) for y in x]) # Stem every word.
df = df.drop(columns=['unstemmed']) # Get rid of the unstemmed column.
df # Print dataframe.

+----+--------------------------------------------------------------+
|    | stemmed                                                      |
|----+--------------------------------------------------------------|
|  0 | ['program', 'program', 'with', 'program', 'languag']         |
|  1 | ['my', 'code', 'is', 'work', 'so', 'there', 'must',          |   
|    |  'be', 'a', 'bug', 'in', 'the', 'interpret']                 |
+----+--------------------------------------------------------------+
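Since the question's goal is classifier accuracy, note that most vectorizers expect whole strings rather than token lists, so the stemmed tokens usually get joined back into one string per row before feature extraction. Below is a minimal, self-contained sketch of that step, assuming scikit-learn is available; the CountVectorizer part is an illustration and not part of the original answer.

import pandas as pd
from nltk.stem.snowball import SnowballStemmer
from sklearn.feature_extraction.text import CountVectorizer

stemmer = SnowballStemmer("english")

data = ["programmers program with programming languages",
        "my code is working so there must be a bug in the interpreter"]
df = pd.DataFrame(data, columns=['unstemmed'])

# Stem every word, then join the tokens back into a single string per
# row, since vectorizers expect raw text rather than token lists.
df['stemmed'] = (df['unstemmed'].str.split()
                 .apply(lambda words: ' '.join(stemmer.stem(w) for w in words)))

# Bag-of-words features over the stemmed text; thanks to stemming,
# "programmers", "program", and "programming" now share one feature.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(df['stemmed'])
print(vectorizer.get_feature_names_out())

The resulting matrix X can then be passed to any scikit-learn classifier; collapsing inflected forms onto one stem reduces the vocabulary size, which is where the accuracy gain the question asks about typically comes from.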