jieba

jieba word segmentation and counting for 西游记

import jieba
# exclude non-name words
excludes = {"一个","那里","怎么","我们","不知","和尚","妖精","两个","甚么","不是","只见","国王","徒弟","呆子","如何","这个","大王","原来" ......
jieba
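The entry above filters an `excludes` set of frequent non-name words out of the count. A minimal self-contained sketch of that pattern; the short word list stands in for `jieba.lcut` output, since the novel's text is not included here:

```python
# Stand-in for jieba.lcut(text) output on the novel's text.
words = ["行者", "一个", "行者", "八戒", "那里", "八戒", "行者"]
# Frequent words that are not character names (a short subset of the set above).
excludes = {"一个", "那里"}
counts = {}
for word in words:
    if word in excludes:
        continue  # drop frequent words that are not character names
    counts[word] = counts.get(word, 0) + 1
print(counts)  # {'行者': 3, '八戒': 2}
```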

jieba word segmentation

import jieba
path = "all.txt"
# read the text file
file = open(path, "r", encoding="utf-8")
text = file.read()
file.close()
# segment with jieba
words = jieba.lcut(text)
counts ......
jieba
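Every snippet in this listing truncates at the same counting loop. A sketch of the usual complete pattern, wrapped in a function; the helper name `top_words` is made up here and is not from any of the posts:

```python
def top_words(words, n=20, excludes=()):
    """Return the n most frequent multi-character words as (word, count) pairs."""
    counts = {}
    for word in words:
        if len(word) == 1 or word in excludes:
            continue  # skip single characters and excluded words
        counts[word] = counts.get(word, 0) + 1
    # Sort by count, descending, and keep the top n.
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Tiny stand-in for jieba.lcut output:
print(top_words(["师父", "师父", "悟空", "了"], n=2))  # [('师父', 2), ('悟空', 1)]
```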

jieba word segmentation

print("徐嘉俊32")
import jieba
txt = open("E:\\大学\\Python\\西游记.txt", "r", encoding='gb18030').read()
words = jieba.lcut(txt)
counts = {}
for word in words: ......
jieba

jieba

import jieba
path = "聊斋.txt"
file = open(path, "r", encoding="utf-8")
text = file.read()
file.close()
words = jieba.lcut(text)
counts = {}
for word in ......
jieba

jieba word segmentation for 红楼梦: top 20 by count

import jieba
# read the text file
path = "红楼梦.txt"
file = open(path, "r", encoding="utf-8")
text = file.read()
file.close()
# segment with jieba
words = jieba.lcut(text)
# count word frequencies ......
红楼 count jieba
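After sorting, these top-20 assignments typically print the results in aligned columns with a format string. A hedged sketch of that final step, with a toy `items` list standing in for the real sorted (word, count) pairs:

```python
# Toy sorted (word, count) pairs; the real list comes from sorting the counts dict.
items = [("宝玉", 3), ("黛玉", 2)]
for word, count in items:
    # Left-align the word to 10 columns, right-align the count to 5.
    print("{0:<10}{1:>5}".format(word, count))
```

The `<10`/`>5` field widths count characters, not display columns, so wide CJK glyphs will not line up perfectly in every terminal; the pattern is still the one these assignments use.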

jieba word segmentation

path = "聊斋志异.txt"
file = open(path, "r", encoding="utf-8")
text = file.read()
file.close()
words = jieba.lcut(text)
counts = {}
for word in words:
    if ......
jieba

The jieba library

import jieba
# read the text file
path = "红楼梦.txt"
file = open(path, "r", encoding="utf-8")
text = file.read()
file.close()
# segment with jieba
words = jieba.lcut(text)
# 统 ......
jieba

jieba word segmentation for 红楼梦

import jieba
txt = open(r"D:\pycharm\python123\jieba分词作业\红楼梦.txt", "r", encoding='utf-8').read()  # raw string avoids invalid backslash escapes
words = jieba.lcut(txt)  # precise-mode segmentation
count = {}  # create an empty dict
for wo ......
红楼 jieba

jieba word segmentation for 西游记

import jieba
txt = open("西游记.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)  # precise-mode segmentation
counts = {}  # store each word and its count as key-value pairs
for word in ......
jieba

jieba word segmentation: 红楼梦

import jieba
excludes = {"什么","一个","我们","那里","你们","如今","说道","知道","起来","姑娘","这里","出来","他们","众人","自己","一面","只见","怎么"," ......
红楼 jieba

Counting character appearances in 红楼梦 with jieba

import jieba
with open('红楼梦.txt', 'r', encoding='utf-8') as f:
    txt = f.read()
words = jieba.lcut(txt)
counts = {}
for word in words:
    if len(word) == 1:
        cont ......
红楼 count characters jieba
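The `len(word) == 1` test that several of these loops truncate at is doing real work: in Chinese text, single-character tokens are mostly particles and function words (了, 的, 不), so skipping them leaves mostly names and other content words. A small self-contained sketch, with a stand-in token list in place of real `jieba.lcut` output:

```python
# Stand-in for jieba.lcut output on the novel's text.
words = ["宝玉", "了", "笑道", "的", "宝玉"]
counts = {}
for word in words:
    if len(word) == 1:
        continue  # single characters are mostly function words, not names
    counts[word] = counts.get(word, 0) + 1
print(counts)  # {'宝玉': 2, '笑道': 1}
```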

jieba word segmentation

import jieba
with open("C:\\Users\\86133\\Desktop\\liaozhai.txt", "r", encoding='utf_8') as f:
    words = jieba.lcut(f.read())
counts = {}
for wo ......
jieba

聊斋 with the jieba library

import jieba
print("02 17向悦")
# read the text file
path = "聊斋志异.txt"
file = open(path, "r", encoding="utf-8")
text = file.read()
file.close()
# segment with jieba
words = jieba. ......
jieba

jieba word segmentation for 红楼梦

import jieba
excludes = {"什么","一个","我们","那里","你们","如今","说道","知道","起来","姑娘","这里","出来","他们","众人","自己","一面","只见","怎么", ......
红楼 jieba

jieba word segmentation: 红楼梦-related

import jieba
text = open(r'C:\Users\李文卓\Desktop\xn\红楼梦.txt', "r", encoding='utf-8').read()  # raw string: '\U' in the original path is an invalid escape
words = jieba.lcut(text)
counts = {}
for word in words:
    if len(word) ......
红楼 jieba

jieba word segmentation

import jieba
# read the text file
path = "西游记.txt"
file = open(path, "r", encoding="utf-8")
text = file.read()
file.close()
# segment with jieba
words = jieba.lcut(text ......
jieba

jieba word segmentation for 西游记

Import the jieba library:
import jieba
Read the file (set the path to wherever the file is stored) and segment it with jieba's precise mode:
txt = open('西游记.txt', 'r', encoding='utf-8').read()
words = jieba.lcut(txt)
co ......
jieba

jieba word segmentation

Students with IDs ending in 1, 2, or 3 do 西游记-related segmentation, the 20 most frequent words. Code:
print("学号后两位为03(2021310143103)")
import jieba
txt = open("E:\\大学\\Python\\西游记.txt", "r", encoding='gb1803 ......
jieba

jieba for 西游记

import jieba
with open('E:\西游记.txt', 'r', encoding='utf-8') as f:  # open the file
    txt = f.read()  # read it into txt
words = jieba.lcut(txt)  # segment with jieba's lcut
counts = {}  # create a dict ......
jieba

jieba word segmentation | 西游记-related words, the 20 most frequent.

Code:
import jieba
txt = open("《西游记》.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)  # precise-mode segmentation
counts = {}  # store each word ......
count jieba

Python homework: jieba word segmentation

Students with IDs ending in 7, 8, 9, or 0 do 聊斋-related segmentation, the 20 most frequent words. Student ID: 2022310143007
import jieba
# read the text file
path = "聊斋志异.txt"
file = open(path, "r", encoding="utf-8")
text = file.read() ......
python jieba

jieba word segmentation

Students with IDs ending in 7, 8, 9, or 0 do 聊斋-related segmentation, the 20 most frequent words.
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 23 18:00:49 2023
@author: 86135
"""
import jieba
# read the text file
path = "C: ......
jieba

jieba word segmentation

import jieba
txt = open(r"D:\python-learn\lianxi\聊斋志异.txt", "r", encoding='utf-8').read()  # raw string avoids invalid backslash escapes
words = jieba.lcut(txt)
counts = {}
for word in words:
    if len ......
jieba


jieba word segmentation

import jieba
# read the text file
path = "红楼梦.txt"
file = open(path, "r", encoding="GB2312", errors="ignore")
text = file.read()
file.close()
# segment with jieba
words = jieba.lc ......
jieba
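The snippets in this listing differ mainly in how they open the text file: utf-8, gb18030, GB2312 with `errors="ignore"`, or the Windows-only 'ansi' alias. A hedged sketch of a fallback reader; the helper name `read_chinese` is made up, and the demo writes its own sample file so it is self-contained:

```python
import os
import tempfile

def read_chinese(path):
    # Try common Chinese encodings in order; gb18030 is a superset of GB2312/GBK.
    for enc in ("utf-8", "gb18030"):
        try:
            with open(path, "r", encoding=enc) as f:
                return f.read()
        except UnicodeDecodeError:
            continue
    # Last resort: keep what decodes, drop the rest.
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        return f.read()

# Self-contained demo: write a gb18030-encoded file, then read it back.
path = os.path.join(tempfile.gettempdir(), "liaozhai_demo.txt")
with open(path, "w", encoding="gb18030") as f:
    f.write("聊斋志异")
print(read_chinese(path))  # 聊斋志异
```

Trying utf-8 first is the safer order, since valid utf-8 rarely decodes cleanly as gb18030 by accident, while short gb18030 byte sequences can occasionally pass as utf-8.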

jieba word segmentation

import jieba
txt = open("D:\\python\\西游记.txt", "r", encoding='ansi').read()
words = jieba.lcut(txt)  # precise-mode segmentation
counts = {}  # store each word and its count as key-value pairs
for ......
jieba

Final assignment 2: jieba word segmentation

Students with IDs ending in 4, 5, or 6 do 红楼梦-related segmentation, the 20 most frequent words.
import jieba
txt = open(r"D:\pycharm\python123\jieba分词作业\红楼梦.txt", "r", encoding='utf-8').read()
words = jieba.lcut(t ......
jieba

jieba word segmentation

jieba word segmentation. Description: students with IDs ending in 1, 2, or 3 do 西游记-related segmentation; the top ......
jieba

jieba word segmentation for 《聊斋》

import jieba
txt = open("聊斋志异白话简写版.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)  # precise-mode segmentation
counts = {}  # store each word and its count as key-value pairs
for wor ......
jieba

jieba word segmentation

import jieba
txt = open("西游记.txt", "r", encoding='utf-8').read()
words = jieba.lcut(txt)  # precise-mode segmentation
counts = {}  # store each word and its count as key-value pairs
for word in ......
jieba
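All of the posts above hand-roll the counting dict with `counts.get(word, 0) + 1`. The standard library's `collections.Counter` does the same job, including the top-n step; a sketch of that alternative (not what any of the posts use), again with a stand-in word list:

```python
from collections import Counter

# Stand-in for jieba.lcut output; keep only multi-character words.
words = ["宝玉", "黛玉", "宝玉", "了", "宝玉", "黛玉"]
counts = Counter(w for w in words if len(w) > 1)
print(counts.most_common(2))  # [('宝玉', 3), ('黛玉', 2)]
```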