Front-End Development Notes [6]: A Gradio-Based Force-Acoustic Signal Analysis Interface

Published 2024-01-10 08:28:30 · Author: qsBye

Abstract

A force-acoustic signal analysis interface built on Gradio, demonstrating how to embed HTML code and SVG images inside a Gradio page.

Note

This is a front-end development exercise only; no claim is made about the academic soundness of the signal processing or data analysis involved.

Source

[https://gitee.com/qsbye/pear-dsp-gradio]

Platform Information

  • "pyaudio~=0.2.14",
  • "requests~=2.31.0",
  • "psutil~=5.9.6",
  • "gradio~=4.8.0",
  • "librosa~=0.10.1",
  • "scipy~=1.10.1",
  • "Pillow~=10.1.0",
  • "fastapi~=0.105.0",
  • "uvicorn~=0.24.0.post1",
  • "jinja2~=3.1.2",
  • "pywavelets~=1.4.1",
  • "EMD-signal~=1.5.2",

Background

Force-Acoustic Signals

A force-acoustic signal is the sound produced synchronously as a force acts on an object, and it reflects the object's material properties. As the literature notes, in mechanical tests of food crispness the measured force-response curve reflects both the internal fracturing of the food's structure under load and the propagation of the resulting sound waves. Certain features of the force curve characterize crispness well, and the mechanical and acoustic signals produced during fracture both carry substantial information about crispness.

Loading JavaScript Files in Gradio

[https://www.volcengine.com/theme/4302487-G-7-1]
[https://blog.csdn.net/junchen1992/article/details/130808585]
[https://www.gradio.app/docs/html]
[https://github.com/gradio-app/gradio/issues/3446]
Gradio's HTML component cannot run JavaScript directly, because the markup is escaped.
Instead, use one of Gradio's predefined event callbacks to inject the script so the JavaScript actually executes. Example:
Note: in Gradio 4.0+ this parameter is named js, with no underscore (it was _js in 3.x).

html = '''
<button id="my_btn" onclick="click_text()">Hello</button>
'''

script = '''
function test(){
    let script = document.createElement('script');
    script.innerHTML = "function click_text(){alert('click')}";
    document.head.appendChild(script);   
}
'''
with gr.Blocks() as block:
    gr.HTML(html)
    block.load(js=script)  # Gradio 4.x; use _js in 3.x
    
block.launch(debug=True, 
            share=False)

block.close()

In this example, the load event fires when the Blocks app loads and runs test(), which appends a <script> element defining click_text(); after that, the button's onclick handler works as expected.

This ensures the JavaScript actually executes and lets users interact with the HTML component.

Gradio Asset Reference Paths

[https://github.com/gradio-app/gradio/issues/884]
[https://github.com/gradio-app/gradio/issues/3879]
[https://github.com/gradio-app/gradio/security/advisories/GHSA-rhq2-3vr9-6mcr]
[https://github.com/gradio-app/gradio/security/advisories/GHSA-3qqg-pgqq-3695]
[https://github.com/gradio-app/gradio/pull/4406]
There are two separate security vulnerabilities here:

  • (1) a security vulnerability that allows users to read arbitrary files on the machines that are running shared Gradio apps
  • (2) the ability of users to use machines that are sharing Gradio apps to proxy arbitrary URLs.

Hi @Olvi73, you can access relative files by adding the prefix file/ to the path.

So if the code to your Gradio app is in a file called app.py, and you have an image called lion.jpg in the same directory:
---- app.py
---- lion.jpg

Then the code in app.py would look like:

import gradio as gr

title = "test"

def inference(text):
    html = (
            "<div >"
            "<img  src='file/lion.jpg' alt='image One'>"
            + "</div>"
    )
    return html

gr.Interface(
    inference,
    gr.inputs.Textbox(placeholder="Enter sentence here..."),
    outputs=["html"],
    title=title,
    allow_flagging="never",

).launch(enable_queue=True)

Note that for security reasons, any file that is included in your app must be in the same directory or in a child directory of app.py.
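The same-directory rule above can be checked explicitly. Below is a minimal sketch of such a check; the helper name and paths are ours for illustration, not part of Gradio's API:

```python
import os

def is_allowed_asset(app_dir: str, requested: str) -> bool:
    """Return True only if `requested` resolves inside `app_dir`.

    Mirrors the rule above: served files must live in the app's
    directory or one of its children (blocks `../` traversal).
    """
    app_dir = os.path.realpath(app_dir)
    target = os.path.realpath(os.path.join(app_dir, requested))
    return os.path.commonpath([app_dir, target]) == app_dir

# lion.jpg next to app.py is fine; escaping the directory is not
print(is_allowed_asset("/srv/app", "lion.jpg"))
print(is_allowed_asset("/srv/app", "../etc/passwd"))
```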

Implementation

Core Code

import random
import gradio # server
from fastapi import FastAPI, Request # server
from fastapi.responses import HTMLResponse # server
from fastapi.staticfiles import StaticFiles # server
from fastapi.templating import Jinja2Templates # server
import uvicorn
import threading # multithreading
import numpy as np
import matplotlib.pyplot as plt # plotting
from mpl_toolkits.mplot3d import Axes3D # plotting
from scipy.signal import stft # signal processing
from PyEMD import EMD # signal processing
import scipy
import pywt # wavelet transforms
import librosa
import requests # network requests
from PIL import Image as PILImage # image handling
import pandas as pd # data handling
import io
import urllib # network requests
import hmac # network requests
import wave # audio
import base64
import time # timing
import os
import json
from io import BytesIO # in-memory binary streams
import zhipuai # LLM

'''start globals'''
# Create the FastAPI app
app = FastAPI()
# HTML templates
templates = Jinja2Templates(directory="src/pear_dsp_gnuradio_gradio/templates")
app.mount("/static", StaticFiles(directory="src/pear_dsp_gnuradio_gradio/static"), name="static")

# CSV file paths
csv_path = "src/pear_dsp_gnuradio_gradio/dataset"
csv_files = [os.path.join(csv_path, f) for f in os.listdir(csv_path) if f.endswith('.csv')]
csv_examples = [csv_file for csv_file in csv_files]

# Currently selected file
g_file_selected = csv_examples[0]

# Output directory
file_output_path = "src/pear_dsp_gnuradio_gradio/outputs"

# Read the SVG file contents
with open("src/pear_dsp_gnuradio_gradio/static/pear-bg.svg", 'r', encoding='utf-8') as f:
    pear_svg_content = f.read()
# Base64-encode the SVG content
pear_svg_base64 = base64.b64encode(pear_svg_content.encode()).decode()
# Gradio page style overrides
gradio_css = '''body { 
    margin:0; 
    padding: 0; 
    display: flex; 
    justify-content: center; 
    align-items: center; 
    height: 100vh; 
    background-image: url(data:image/svg+xml;base64,{pear_svg_base64}); 
    background-repeat: repeat; 
    }
    .btn1{
    background-color:yellow !important;
    color:blue;
    }
    #args_selector_button {
      position: fixed;
      bottom: 20px;
      right: 20px;
      background-color: pink;
      color: white;
      border: none;
      border-radius: 50%;
      padding: 15px;
      box-shadow: 0px 3px 5px rgba(0, 0, 0, 0.2);
      cursor: pointer;
      font-size: 16px;
      text-decoration: none;
    }'''.replace("{pear_svg_base64}",pear_svg_base64)
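The background image in the CSS above is an SVG inlined as a base64 data URI, exactly as pear_svg_base64 is built. A minimal sketch of the round trip (the SVG string here is a stand-in for pear-bg.svg):

```python
import base64

svg = '<svg xmlns="http://www.w3.org/2000/svg" width="8" height="8"></svg>'

# encode: bytes -> base64 text, as done for pear_svg_base64 above
svg_b64 = base64.b64encode(svg.encode("utf-8")).decode("ascii")
css_url = f"url(data:image/svg+xml;base64,{svg_b64})"

# decode: the data URI payload recovers the original markup
assert base64.b64decode(svg_b64).decode("utf-8") == svg
print(css_url[:30])  # url(data:image/svg+xml;base64,
```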

# Computed parameter values
g_dataframe1_data: list = [""] * 30 # force signal
g_dataframe2_data: list = [""] * 30 # sound signal

# Table values
g_dataframe2_value = [["声学信号尖峰数量", "N/A", "个"],
                        ["最大幅值与最小幅值之差", "N/A", "单位"],
                        ["声学曲线下面积", "N/A", "单位*s"],
                        ["声学信号最大幅值", "N/A", "单位"],
                        ["声学信号平均幅值", "N/A", "单位"],
                        ["声学信号峰值的平均值", "N/A", "单位"],
                        ["幅值标准偏差", "N/A", "单位"],
                        ["声学曲线长度", "N/A", "s"],
                        ["声学信号幅值平方和", "N/A", "单位^2"],
                        ["声学信号功率谱的平均幅值", "N/A", "无单位"],
                        ["声学Richardson表观分形维数", "N/A", "无单位"],
                        ["声学Kolmogorov表观分形维数", "N/A", "无单位"],
                        ["声学Minkowski表观分形维数", "N/A", "无单位"],
                        ["声学Korcak表观分形维数", "N/A", "无单位"],
                        ["声信号滤波前后信噪比","N/A","无单位"],
                        ["声信号时频分辨率","N/A","无单位"]
                        ]
g_dataframe1_value = [
                        ["第一个力峰与起点的频率", "N/A", "Hz"],
                        ["最大力与起点的频率", "N/A", "Hz"],
                        ["第一个力峰与力谷之差", "N/A", "N"],
                        ["第一个力谷处的力值", "N/A", "N"],
                        ["力学曲线力峰的数量", "N/A", "个"],
                        ["屈服力与终点力之差", "N/A", "N"],
                        ["曲线与横坐标围成的面积", "N/A", "N*mm"],
                        ["力学曲线的长度", "N/A", "mm"],
                        ["力的平均值", "N/A", "N"],
                        ["力峰与力谷之差的平均值", "N/A", "N"],
                        ["第一个力峰处的力值", "N/A", "N"],
                        ["杨氏模量", "N/A", "GPa"],
                        ["位移8mm对应的力值", "N/A", "N"],
                        ["力学曲线的最大力峰值", "N/A", "N"],
                        ["屈服力与终点力之比", "N/A", "无单位"],
                        ["力学曲线力峰的平均值", "N/A", "N"],
                        ["力学信号功率谱的平均幅值", "N/A", "无单位"],
                        ["力学Richardson表观分形维数", "N/A", "无单位"],
                        ["力学Kolmogorov表观分形维数", "N/A", "无单位"],
                        ["力学Minkowski表观分形维数", "N/A", "无单位"],
                        ["力学Korcak表观分形维数", "N/A", "无单位"]
                    ]
'''end globals'''

'''start custom classes'''
pass
'''end custom classes'''

'''start preprocessing I'''
# Force and sound sample points taken from the processed data
force_signal:list  # y values used as the force-signal amplitudes
sound_signal:list  # y values used as the sound-signal amplitudes

# Sampling rates and signal duration (set these to match the acquisition setup)
sample_rate_sound = 40000  # sampling rate, e.g. 44100 Hz for audio; 40 kHz here
sample_rate_force = 40000  # nominally 500 Hz in the source data; 40 kHz used here

# Time and frequency resolution for the spectrogram
nperseg_force = int(sample_rate_force * 0.05)    # samples per segment (0.05 s)
noverlap_force = int(sample_rate_force * 0.025)  # overlap between segments (0.025 s)
nperseg_sound = int(sample_rate_sound * 0.05)    # samples per segment (0.05 s)
noverlap_sound = int(sample_rate_sound * 0.025)  # overlap between segments (0.025 s)
'''end preprocessing I'''
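The nperseg/noverlap choices above fix the STFT's time-frequency trade-off: with a 0.05 s window at 40 kHz, the hop is nperseg - noverlap = 1000 samples (25 ms) and the frequency bins sit fs / nperseg = 20 Hz apart. A quick self-contained check with scipy on a synthetic 4 s tone (the test signal is ours, not from the dataset):

```python
import numpy as np
from scipy.signal import stft

fs = 40000
nperseg = int(fs * 0.05)    # 2000 samples per segment
noverlap = int(fs * 0.025)  # 1000 samples of overlap

# a 4 s test tone, like the 4 s recordings used in this project
t = np.linspace(0, 4, 4 * fs, endpoint=False)
x = np.sin(2 * np.pi * 440 * t)

f, times, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)

print(f[1] - f[0])          # frequency resolution: fs / nperseg = 20 Hz
print(times[1] - times[0])  # hop: (nperseg - noverlap) / fs = 0.025 s
```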

'''start custom functions'''
@app.get("/",response_class=HTMLResponse)
async def root(request: Request):
    '''
    Do not change this name or body
    '''
    return templates.TemplateResponse("index.html", {"request": request})

def run_fastapi():
    '''
    Serve the FastAPI pages
    '''
    uvicorn.run(app, host="127.0.0.1", port=8000)

def run_gradio():
    '''
    Serve the Gradio page
    '''
    webui.launch()

def hello():
    '''
    Start FastAPI
    '''
    run_fastapi()
    return "Hello from weekend-webui-gradio!"

def fn_preview_csv(csv_file):
    '''
    Accept an uploaded CSV file and preprocess it
    '''
    # Update the global selection
    global g_file_selected 
    g_file_selected = csv_file

    # Convert to an audio signal
    wav = fn_convert_csv_to_wav(csv_file)

    # Render force and sound signals together (also refreshes the
    # force_signal/sound_signal globals as a side effect)
    img1 = create_original_line_plot(csv_file)

    return wav

def create_original_line_plot(csv_file):
    '''
    Plot the raw data as a line chart and return it as an image
    '''
    # Read the CSV file
    data = pd.read_csv(csv_file, names=["力信号", "声信号"])
    global force_signal,sound_signal

    # Extract the force and sound columns as NumPy arrays
    force_signal = data["力信号"].values.astype(np.float32)
    sound_signal = data["声信号"].values.astype(np.float32)
    
    # Rescale the force signal into the sound signal's amplitude range
    force_signal = np.interp(force_signal, (force_signal.min(), force_signal.max()), (0, sound_signal.max()))

    # Build the time axis (4 s at a 40 kHz sampling rate)
    time = np.linspace(0, 4, 4 * 40000, endpoint=False)
    
    # Select the valid segment (the 0.5 s idle stroke is kept here;
    # slice from a later index to trim it)
    valid_time = time[0:]
    valid_force = force_signal[0:]
    valid_sound = sound_signal[0:]
    
    # Draw the line chart
    fig, ax = plt.subplots()
    ax.plot(valid_time, valid_sound, color='red', label='Sound Signal')
    ax.plot(valid_time, valid_force, color='blue', label='Force Signal')
    ax.set_title("Sound & Force Signal")
    ax.legend()
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Amplitude")
    ax.set_xlim(right=4.0)  # cap the x axis at 4.0 s
    ax.autoscale(enable=True, axis='y')  # autoscale the y axis
    fig.tight_layout()  # tidy subplot spacing
    
    # Save the figure into a byte stream
    buf = BytesIO()
    plt.savefig(buf, format="png")
    buf.seek(0)
    plt.close(fig)  # close the figure to avoid accumulating objects in memory
    return PILImage.open(buf)

def create_original_line_plot2(csv_file):
    '''
    Plot the raw (noisy) data as a line chart and return it as an image
    '''
    # Read the CSV file
    data = pd.read_csv(csv_file, names=["力信号", "声信号"])
    
    # Extract the force and sound columns as NumPy arrays
    force_signal = data["力信号"].values.astype(np.float32)
    sound_signal = data["声信号"].values.astype(np.float32)
    
    # Rescale the force signal into the sound signal's amplitude range
    force_signal = np.interp(force_signal, (force_signal.min(), force_signal.max()), (0, sound_signal.max()))

    # Build the time axis (4 s at a 40 kHz sampling rate)
    time = np.linspace(0, 4, 4 * 40000, endpoint=False)
    
    # Select the valid segment (the 0.5 s idle stroke is kept here)
    valid_time = time[0:]
    valid_sound = sound_signal[0:]
    
    # Draw the line chart
    fig, ax = plt.subplots()
    ax.plot(valid_time, valid_sound, color='red', label='Sound Signal')
    ax.set_title("Sound Signal")
    ax.legend()
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Amplitude")
    ax.set_xlim(right=4.0)  # cap the x axis at 4.0 s
    ax.autoscale(enable=True, axis='y')  # autoscale the y axis
    fig.tight_layout()  # tidy subplot spacing
    
    # Save the figure into a byte stream
    buf = BytesIO()
    plt.savefig(buf, format="png")
    buf.seek(0)
    plt.close(fig)  # close the figure to avoid accumulating objects in memory
    return PILImage.open(buf)

def fn_plot_dwt_spectrum(signal, sampling_rate,name="sound",wavelet="db4")->gradio.Image:
    '''
    Plot the spectrum of a DWT-reconstructed signal.
    Available wavelets:
    haar family: haar
    db family: db1, db2, db3, db4, db5, db6, db7, db8, db9, db10, db11, db12, db13, db14, db15, db16, db17, db18, db19, db20, db21, db22, db23, db24, db25, db26, db27, db28, db29, db30, db31, db32, db33, db34, db35, db36, db37, db38
    sym family: sym2, sym3, sym4, sym5, sym6, sym7, sym8, sym9, sym10, sym11, sym12, sym13, sym14, sym15, sym16, sym17, sym18, sym19, sym20
    coif family: coif1, coif2, coif3, coif4, coif5, coif6, coif7, coif8, coif9, coif10, coif11, coif12, coif13, coif14, coif15, coif16, coif17
    bior family: bior1.1, bior1.3, bior1.5, bior2.2, bior2.4, bior2.6, bior2.8, bior3.1, bior3.3, bior3.5, bior3.7, bior3.9, bior4.4, bior5.5, bior6.8
    rbio family: rbio1.1, rbio1.3, rbio1.5, rbio2.2, rbio2.4, rbio2.6, rbio2.8, rbio3.1, rbio3.3, rbio3.5, rbio3.7, rbio3.9, rbio4.4, rbio5.5, rbio6.8
    dmey family: dmey
    gaus family: gaus1, gaus2, gaus3, gaus4, gaus5, gaus6, gaus7, gaus8
    mexh family: mexh
    morl family: morl
    cgau family: cgau1, cgau2, cgau3, cgau4, cgau5, cgau6, cgau7, cgau8
    shan family: shan
    fbsp family: fbsp
    cmor family: cmor
    '''

    # Wavelet decomposition
    coeffs = pywt.wavedec(signal, wavelet=wavelet)
    # Reconstruct the signal from the decomposition coefficients
    reconstructed_signal = pywt.waverec(coeffs, wavelet=wavelet)
    
    # Plot the spectrum
    frequencies = np.fft.rfftfreq(len(reconstructed_signal), 1/sampling_rate)
    spectrum = np.abs(np.fft.rfft(reconstructed_signal))
    plt.plot(frequencies, spectrum)
    plt.xlabel('Frequency (Hz)')
    plt.ylabel('Amplitude')
    plt.title(f'{wavelet} DWT Spectrum of the {name} Signal')
    # Save the figure into a byte stream
    img_buf = BytesIO()
    plt.savefig(img_buf, format="png")
    img_buf.seek(0)
    plt.close()  # close the figure to avoid accumulating objects in memory
    return PILImage.open(img_buf)

def fn_apply_args(arg1,arg2,arg3,arg4,arg5,arg6,arg7,arg8,arg9,arg10,arg11,arg12,arg13,arg14,arg15,img1,img2,img3,img4,img5,img6,text1,text2,dataframe1,dataframe2)->(gradio.Image,gradio.Image,gradio.Image,gradio.Image,gradio.Image,...,gradio.Textbox,gradio.Textbox,gradio.Dataframe,gradio.Dataframe):
    '''
    Apply the parameters and process the signals:
    1. denoise
    2. transform
    3. display the images
    '''
    # Collect the parameter values
    args_tuple = (None,(None,arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9, arg10, arg11, arg12,arg13,arg14,arg15),(None,img1,img2,img3,img4,img5,img6),(None,text1,text2),(None,dataframe1,dataframe2))
    print(args_tuple[1][1])

    # Refresh the CSV preview images
    _img5 = create_original_line_plot(g_file_selected)
    _img6 = create_original_line_plot2(g_file_selected)
    _img3 = _img6

    # Transform and denoise according to the selected method
    if args_tuple[1][1] == "STFT":
        _img1,_img2=fn_stft_handler(args_tuple)
        _img4=fn_noise_handler(args_tuple)
        _text1 = format_force_results()
        # TODO: analyze the results automatically with an LLM
        _text2 = format_sound_results()
        # TODO: analyze the results automatically with an LLM

    elif args_tuple[1][1] == "DWT":
        _img1,_img2=fn_dwt_handler(args_tuple)
        _img4=fn_noise_handler(args_tuple)
        _text1 = format_force_results()
        _text2 = format_sound_results()

    elif args_tuple[1][1] == "Hilbert":
        _img1,_img2=fn_hilbert_handler(args_tuple)
        _img4=fn_noise_handler(args_tuple)
        _text1 = format_force_results()
        _text2 = format_sound_results()

    elif args_tuple[1][1] == "EMD_": # temporarily disabled
        _img1,_img2=fn_emd_handler(args_tuple)
        _img4=fn_noise_handler(args_tuple)
        _text1 = format_force_results()
        _text2 = format_sound_results()
    
    return _img1,_img2,_img3,_img4,_img5,_img6,_text1,_text2,g_dataframe1_value,g_dataframe2_value

def fn_stft_handler(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Process the signals with the STFT
    '''
    start_time = time.time()  # record the start time

    _img1, _img2 = fn_stft_draw_3d_spectrogram(args_tuple)

    end_time = time.time()  # record the end time
    execution_time = (end_time - start_time) * 1000  # elapsed time in milliseconds
    print(f"Elapsed: {execution_time:.2f} ms")

    return _img1, _img2

def fn_dwt_handler(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Process the signals with the DWT
    '''
    start_time = time.time()  # record the start time

    _img1, _img2 = fn_dwt_draw_3d_spectrogram(args_tuple)

    end_time = time.time()  # record the end time
    execution_time = (end_time - start_time) * 1000  # elapsed time in milliseconds
    print(f"Elapsed: {execution_time:.2f} ms")

    return _img1, _img2

def fn_hilbert_handler(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Process the signals with the Hilbert transform
    '''
    start_time = time.time()  # record the start time

    _img1, _img2 = fn_hilbert_draw_3d_spectrogram(args_tuple)

    end_time = time.time()  # record the end time
    execution_time = (end_time - start_time) * 1000  # elapsed time in milliseconds
    print(f"Elapsed: {execution_time:.2f} ms")

    return _img1, _img2

def fn_emd_handler(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Process the signals with empirical mode decomposition
    '''
    start_time = time.time()  # record the start time

    _img1, _img2 = fn_emd_draw_3d_spectrogram(args_tuple)

    end_time = time.time()  # record the end time
    execution_time = (end_time - start_time) * 1000  # elapsed time in milliseconds
    print(f"Elapsed: {execution_time:.2f} ms")

    return _img1, _img2

def fn_noise_handler(args_tuple:tuple)->gradio.Image:
    '''
    Noise handling (filtering)
    '''
    # Read the CSV file
    data = pd.read_csv(g_file_selected, names=["力信号", "声信号"])

    # Extract the sound signal as a NumPy array
    sound_signal = data["声信号"].values.astype(np.float32)

    if(args_tuple[1][2]=="谱减滤波"):
        print("Spectral-subtraction filtering")
        signal = sound_signal

        # Apply spectral subtraction
        filtered_signal = spectral_subtraction(signal)
        # Generate a random file name
        random_int = random.randint(100, 999)
        output_filename = f"{file_output_path}/去噪_谱减法_{random_int}.wav"
        print(f"Saving to: {output_filename}")
        # Save to file
        save_to_wav(output_filename,filtered_signal)

        # Analyze the filtered signal
        fn_calc_sound_dataframe_noise(signal,filtered_signal)

        # Compute the STFT of the filtered signal with librosa
        stft = librosa.stft(filtered_signal)

        # Mel spectrogram
        melspec = librosa.feature.melspectrogram(y=filtered_signal, sr=sample_rate_sound)

        # Convert the mel spectrogram to decibels
        logmelspec = librosa.power_to_db(melspec)

        # Plot the mel spectrogram
        plt.figure(facecolor='white')  # white background
        ax1 = plt.subplot(2, 1, 1)  # subplot for the mel spectrogram
        librosa.display.specshow(logmelspec, sr=sample_rate_sound, x_axis='time', y_axis='mel', cmap='Blues')
        plt.colorbar(format='%+2.0f dB')  # color bar in decibels
        plt.title("Mel Spectrogram After Spectral Subtraction")
        plt.xlabel("Time (s)")
        plt.ylabel("Mel Frequency")

        # Also show the filtered time-domain amplitude
        ax2 = plt.subplot(2, 1, 2)  # subplot at (2,1,2) for the time-domain plot
        ax2.plot(np.arange(len(filtered_signal)) / sample_rate_sound ,filtered_signal)  # time-domain amplitude
        ax2.set_xlabel("Time (s)")
        ax2.set_ylabel("Amplitude")

        # Save the figure into a byte stream
        img_buf = BytesIO()
        plt.savefig(img_buf, format="png")
        img_buf.seek(0)
        plt.close()
        return PILImage.open(img_buf)

    elif(args_tuple[1][2]=="维纳滤波"):
        print("Wiener filtering")
        # Treat the sample points as a signal sequence
        signal = sound_signal
        # Apply Wiener filtering
        filtered_signal = wiener_filtering(signal)
        # Generate a random file name
        random_int = random.randint(100, 999)
        output_filename = f"{file_output_path}/去噪_维纳法_{random_int}.wav"
        # Save to file
        save_to_wav(output_filename,filtered_signal)
        
        # Mel spectrogram
        melspec = librosa.feature.melspectrogram(y=filtered_signal, sr=sample_rate_sound)

        # Convert the mel spectrogram to decibels
        logmelspec = librosa.power_to_db(melspec)

        # Plot the mel spectrogram
        plt.figure(facecolor='white')  # white background
        ax1 = plt.subplot(2, 1, 1)  # subplot for the mel spectrogram
        librosa.display.specshow(logmelspec, sr=sample_rate_sound, x_axis='time', y_axis='mel', cmap='Blues')
        plt.colorbar(format='%+2.0f dB')  # color bar in decibels
        plt.title("Mel Spectrogram After Wiener Filtering")
        plt.xlabel("Time (s)")
        plt.ylabel("Mel Frequency")

        ax2 = plt.subplot(2, 1, 2)  # subplot at (2,1,2) for the time-domain plot
        ax2.plot(np.arange(len(filtered_signal)) / sample_rate_sound ,filtered_signal)  # time-domain amplitude
        ax2.set_xlabel("Time (s)")
        ax2.set_ylabel("Amplitude")
        # Save the figure into a byte stream
        img_buf = BytesIO()
        plt.savefig(img_buf, format="png")
        img_buf.seek(0)
        plt.close()
        return PILImage.open(img_buf)

    elif(args_tuple[1][2]=="卡尔曼滤波"):
        print("Kalman filtering")
        # Treat the sample points as a signal sequence
        signal = sound_signal
        # Apply Kalman filtering
        filtered_signal = kalman_filter(signal)
        # Generate a random file name
        random_int = random.randint(100, 999)
        output_filename = f"{file_output_path}/去噪_卡尔曼滤波_{random_int}.wav"
        # Save to file
        save_to_wav(output_filename,filtered_signal)
        
        # Mel spectrogram
        melspec = librosa.feature.melspectrogram(y=filtered_signal, sr=sample_rate_sound)

        # Convert the mel spectrogram to decibels
        logmelspec = librosa.power_to_db(melspec)

        # Plot the mel spectrogram
        plt.figure(facecolor='white')  # white background
        ax1 = plt.subplot(2, 1, 1)  # subplot for the mel spectrogram
        librosa.display.specshow(logmelspec, sr=sample_rate_sound, x_axis='time', y_axis='mel', cmap='Blues')
        plt.colorbar(format='%+2.0f dB')  # color bar in decibels
        plt.title("Mel Spectrogram After Kalman Filtering")
        plt.xlabel("Time (s)")
        plt.ylabel("Mel Frequency")

        ax2 = plt.subplot(2, 1, 2)  # subplot at (2,1,2) for the time-domain plot
        ax2.plot(np.arange(len(filtered_signal)) / sample_rate_sound ,filtered_signal)  # time-domain amplitude
        ax2.set_xlabel("Time (s)")
        ax2.set_ylabel("Amplitude")
        # Save the figure into a byte stream
        img_buf = BytesIO()
        plt.savefig(img_buf, format="png")
        img_buf.seek(0)
        plt.close()
        return PILImage.open(img_buf)

def fn_stft_draw_3d_spectrogram(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Draw 3-D STFT spectrograms
    Output: byte streams
    '''

    # Compute the spectra; keep separate time/frequency grids, because the
    # force and sound STFTs use different nperseg/noverlap settings
    freqs_force, times_force, stft_result_force = stft(x=force_signal, fs=sample_rate_force, window=args_tuple[1][5], nperseg=args_tuple[1][3], noverlap=args_tuple[1][4],nfft=args_tuple[1][6])
    freqs_sound, times_sound, stft_result_sound = stft(x=sound_signal, fs=sample_rate_sound, window=args_tuple[1][5], nperseg=args_tuple[1][14], noverlap=args_tuple[1][15],nfft=args_tuple[1][6])

    # 3-D spectrogram (force signal)
    fig = plt.figure(figsize=(10, 10))  # 10x10 inch figure
    ax = fig.add_subplot(111, projection='3d')  # one 3-D subplot filling the figure
    X, Y = np.meshgrid(times_force, freqs_force)
    ax.plot_surface(X, Y, np.abs(stft_result_force), cmap='viridis')
    ax.set_xlabel('Time (s)')
    ax.set_ylabel('Frequency (Hz)')
    ax.set_zlabel('Amplitude')
    ax.set_title('3D Spectrogram of Force Signal')
    # Save the figure into a byte stream
    img1_buf = BytesIO()
    plt.savefig(img1_buf, format="png")
    img1_buf.seek(0)
    plt.close()
    print("Showing the force-signal spectrogram")

    # 3-D spectrogram (sound signal)
    fig = plt.figure(figsize=(10, 10))  # 10x10 inch figure
    ax = fig.add_subplot(111, projection='3d')  # one 3-D subplot filling the figure
    X, Y = np.meshgrid(times_sound, freqs_sound)
    ax.plot_surface(X, Y, np.abs(stft_result_sound), cmap='viridis')
    ax.set_xlabel('Time (s)')
    ax.set_ylabel('Frequency (Hz)')
    ax.set_zlabel('Amplitude')
    ax.set_title('3D Spectrogram of Sound Signal')
    # Save the figure into a byte stream
    img2_buf = BytesIO()
    plt.savefig(img2_buf, format="png")
    img2_buf.seek(0)
    plt.close()
    print("Showing the sound-signal spectrogram")

    # Return the images
    return PILImage.open(img1_buf),PILImage.open(img2_buf)

def fn_dwt_draw_3d_spectrogram(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Draw DWT spectra
    Output: byte streams
    '''
    # Map the selected wavelet family to a pywt wavelet name
    if(args_tuple[1][9]=="Daubechies"):
        _wavelet = "db4"
    elif(args_tuple[1][9]=="Haar"):
        _wavelet = "haar"
    elif(args_tuple[1][9]=="Morlet"):
        # note: 'morl' is a continuous wavelet; pywt.wavedec only accepts
        # discrete wavelets, so this choice will raise an error there
        _wavelet = "morl"
    else:
        print("use db3")
        _wavelet = "db3"

    _img1 = fn_plot_dwt_spectrum(force_signal,sample_rate_force,"force",_wavelet)
    _img2 = fn_plot_dwt_spectrum(sound_signal,sample_rate_sound,"sound",_wavelet)

    # Return the images
    return _img1,_img2

def spectral_subtraction(signal, alpha=0.01):
    '''
    Spectral-subtraction denoising
    '''
    # Use the first half of the signal as the noise estimate
    # (note: 0.5 * len(signal) is half the samples, not the first 0.5 s)
    noise_part = signal[:int(0.5 * len(signal))]
    
    # Power spectrum of the signal
    signal_spectrum = np.abs(np.fft.fft(signal)) ** 2
    
    # Power spectrum of the noise (mean over its upper half)
    noise_spectrum = np.abs(np.fft.fft(noise_part)) ** 2
    noise_spectrum = np.mean(noise_spectrum[int(len(noise_spectrum) / 2):])
    
    # Subtract the scaled noise power from the signal power
    subtracted_spectrum = np.maximum(signal_spectrum - alpha * noise_spectrum, 0)
    
    # Apply the gain and reconstruct the signal with the original phase
    filtered_signal = np.real(np.fft.ifft(np.sqrt(subtracted_spectrum) * np.exp(1j * np.angle(np.fft.fft(signal)))))

    # SNR
    snr = 10 * np.log10(np.sum(signal_spectrum) / np.sum(noise_spectrum))
    
    # RMSE
    rmse = np.sqrt(np.mean((signal - filtered_signal) ** 2))
    
    # Spectral smoothness before and after filtering
    smoothness_before = np.mean(np.diff(signal_spectrum))
    smoothness_after = np.mean(np.diff(subtracted_spectrum))
    
    # Spectral roll-off before and after filtering
    roll_off_before = np.sum(signal_spectrum[int(len(signal_spectrum) / 2):])
    roll_off_after = np.sum(subtracted_spectrum[int(len(subtracted_spectrum) / 2):])
    
    # Report
    print(f"SNR: {snr:.2f} dB")
    print(f"RMSE: {rmse:.2f}")
    print(f"∆ Smoothness: {(smoothness_after-smoothness_before):.6f}")
    print(f"∆ Roll-off: {(roll_off_after-roll_off_before):.2f}")
    
    return filtered_signal

def fn_hilbert_draw_3d_spectrogram(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Draw 2-D plots based on the Hilbert transform
    Output: byte streams
    '''

    # Read the CSV file
    data = pd.read_csv(g_file_selected, names=["力信号", "声信号"])

    # Extract the force and sound signals as NumPy arrays
    force_signal = data["力信号"].values.astype(np.float32)
    sound_signal = data["声信号"].values.astype(np.float32)

    # Apply the Hilbert transform
    analytic_force_signal = scipy.signal.hilbert(force_signal)
    analytic_sound_signal = scipy.signal.hilbert(sound_signal)
    
    # Plot the Hilbert transform of the force signal
    plt.figure(figsize=(10, 8))
    plt.plot(np.real(analytic_force_signal), label='Real Part')
    plt.plot(np.imag(analytic_force_signal), label='Imaginary Part')
    plt.title('Hilbert Transform of Force Signal')
    plt.legend()
    # Save the figure into a byte stream
    img1_buf = BytesIO()
    plt.savefig(img1_buf, format="png")
    img1_buf.seek(0)
    plt.close()
    print("Showing the force-signal Hilbert transform plot")

    # Plot the Hilbert transform of the sound signal
    plt.figure(figsize=(10, 8))
    plt.plot(np.real(analytic_sound_signal), label='Real Part')
    plt.plot(np.imag(analytic_sound_signal), label='Imaginary Part')
    plt.title('Hilbert Transform of Sound Signal')
    plt.legend()
    # Save the figure into a byte stream
    img2_buf = BytesIO()
    plt.savefig(img2_buf, format="png")
    img2_buf.seek(0)
    plt.close()
    print("Showing the sound-signal Hilbert transform plot")

    # Return the images
    return PILImage.open(img1_buf),PILImage.open(img2_buf)

def emd(signal):
    '''
    EMD decomposition helper
    '''
    # Create the EMD object
    emd = EMD()
    
    # Run the decomposition
    imfs = emd.emd(signal)
    
    return imfs

def fn_emd_draw_3d_spectrogram(args_tuple:tuple)->(gradio.Image,gradio.Image):
    '''
    Draw 2-D time-domain amplitude plots based on EMD
    Output: byte streams
    '''
    # Compute the EMD
    imfs_force = emd(force_signal)
    imfs_sound = emd(sound_signal)

    # Time-domain amplitude plot (force signal)
    fig, ax = plt.subplots()
    times = np.arange(len(force_signal)) / sample_rate_force
    ax.plot(times, np.abs(imfs_force[0]))  # amplitude of the first IMF
    ax.set_xlabel('Time (s)')
    ax.set_ylabel('Amplitude')
    ax.set_title('Time Domain Amplitude of Force Signal based on EMD Transform')
    # Save the figure into a byte stream
    img1_buf = BytesIO()
    plt.savefig(img1_buf, format="png")
    img1_buf.seek(0)
    plt.close()
    print("Showing the force-signal time-domain amplitude plot")

    # Time-domain amplitude plot (sound signal)
    fig, ax = plt.subplots()
    times = np.arange(len(sound_signal)) / sample_rate_sound
    ax.plot(times, np.abs(imfs_sound[0]))  # amplitude of the first IMF
    ax.set_xlabel('Time (s)')
    ax.set_ylabel('Amplitude')
    ax.set_title('Time Domain Amplitude of Sound Signal based on EMD Transform')
    # Save the figure into a byte stream
    img2_buf = BytesIO()
    plt.savefig(img2_buf, format="png")
    img2_buf.seek(0)
    plt.close()
    print("Showing the sound-signal time-domain amplitude plot")

    # Return the images
    return PILImage.open(img1_buf), PILImage.open(img2_buf)

def wiener_filtering(signal, alpha=0.01):
    '''
    Wiener filtering
    '''
    # Power spectrum of the signal
    signal_spectrum = np.abs(np.fft.fft(signal)) ** 2
    
    # Interpolate the power spectrum to match the signal length
    interpolated_spectrum = np.interp(np.arange(len(signal)), np.arange(len(signal_spectrum)), signal_spectrum)

    # Noise power spectrum (simplified here as the mean of the signal's power spectrum)
    noise_spectrum = np.mean(interpolated_spectrum)
    
    # Wiener filter gain
    gain = interpolated_spectrum / (interpolated_spectrum + alpha * noise_spectrum)
    
    # Pad interpolated_spectrum to the same length as gain
    interpolated_spectrum = np.pad(interpolated_spectrum, (0, len(gain) - len(interpolated_spectrum)), 'constant')

    # Apply the gain and reconstruct the signal
    filtered_signal = np.real(np.fft.ifft(gain * np.fft.fft(signal)))
    
    # SNR
    snr = 10 * np.log10(np.sum(signal_spectrum) / np.sum(noise_spectrum))
    
    # RMSE
    rmse = np.sqrt(np.mean((signal - filtered_signal) ** 2))
    
    # Spectral smoothness before and after filtering
    smoothness_before = np.mean(np.diff(signal_spectrum))
    smoothness_after = np.mean(np.diff(gain * interpolated_spectrum))
    
    # Spectral roll-off before and after filtering (upper half of the spectrum;
    # the original slice [len(spectrum):] was empty and always summed to 0)
    roll_off_before = np.sum(signal_spectrum[len(signal_spectrum) // 2:])
    roll_off_after = np.sum((gain * interpolated_spectrum)[len(interpolated_spectrum) // 2:])
    
    # Report
    print(f"SNR: {snr:.2f} dB")
    print(f"RMSE: {rmse:.2f}")
    print(f"∆ Smoothness: {(smoothness_after-smoothness_before):.6f}")
    print(f"∆ Roll-off: {(roll_off_after-roll_off_before):.2f}")
    
    return filtered_signal

def kalman_filter(signal, Q=0.01, R=1.0):
    """
    One-dimensional Kalman filter
    :param signal: input signal
    :param Q: process-noise covariance
    :param R: measurement-noise covariance
    :return: filtered signal
    """
    # Initialize the state estimate and covariance
    x_est = np.zeros_like(signal)
    P = np.zeros_like(signal)
    x_est[0] = signal[0]
    P[0] = 1.0

    # Kalman filter recursion
    for i in range(1, len(signal)):
        # Prediction step
        x_pred = x_est[i-1]
        P_pred = P[i-1] + Q

        # Update step
        K = P_pred / (P_pred + R)  # Kalman gain
        x_est[i] = x_pred + K * (signal[i] - x_pred)  # update the state estimate
        P[i] = (1 - K) * P_pred  # update the error covariance
    
    # Power spectrum of the signal
    signal_spectrum = np.abs(np.fft.fft(signal)) ** 2
    
    # Noise power spectrum
    noise_spectrum = np.mean(signal_spectrum)
    
    # Power spectrum of the filtered signal
    filtered_signal_spectrum = np.abs(np.fft.fft(x_est)) ** 2
    
    # SNR
    snr = 10 * np.log10(np.sum(signal_spectrum) / np.sum(noise_spectrum))
    
    # RMSE
    rmse = np.sqrt(np.mean((signal - x_est) ** 2))
    
    # Spectral smoothness before and after filtering
    smoothness_before = np.mean(np.diff(signal_spectrum))
    smoothness_after = np.mean(np.diff(filtered_signal_spectrum))
    
    # Spectral roll-off before and after filtering
    roll_off_before = np.sum(signal_spectrum[int(len(signal_spectrum) / 2):])
    roll_off_after = np.sum(filtered_signal_spectrum[int(len(filtered_signal_spectrum) / 2):])
    
    # Report
    print(f"SNR: {snr:.2f} dB")
    print(f"RMSE: {rmse:.2f}")
    print(f"∆ Smoothness: {(smoothness_after-smoothness_before):.6f}")
    print(f"∆ Roll-off: {(roll_off_after-roll_off_before):.2f}")
    
    return x_est
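The Kalman filter above can be sanity-checked on synthetic data: a constant level buried in Gaussian noise should come out much closer to the truth. A small self-contained sketch (it re-implements the same scalar recursion so the demo runs on its own; the test signal is ours):

```python
import numpy as np

def kalman_1d(z, Q=0.01, R=1.0):
    # same scalar predict/update recursion as kalman_filter above
    x = np.zeros_like(z)
    P = np.zeros_like(z)
    x[0], P[0] = z[0], 1.0
    for i in range(1, len(z)):
        x_pred, P_pred = x[i - 1], P[i - 1] + Q  # predict
        K = P_pred / (P_pred + R)                # Kalman gain
        x[i] = x_pred + K * (z[i] - x_pred)      # update the estimate
        P[i] = (1 - K) * P_pred                  # update the covariance
    return x

rng = np.random.default_rng(0)
truth = np.full(2000, 5.0)                # constant true level
noisy = truth + rng.normal(0, 1.0, 2000)  # measurement noise, std = 1

est = kalman_1d(noisy)
rmse_before = np.sqrt(np.mean((noisy - truth) ** 2))
rmse_after = np.sqrt(np.mean((est - truth) ** 2))
print(rmse_after < rmse_before)  # True: filtering reduces the error
```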

def save_to_wav(output_filename,filtered_signal):
    '''
    Save the filtered signal as a WAV file
    '''
    with wave.open(output_filename, 'wb') as output_file:
        # WAV parameters
        n_channels = 1  # mono
        sample_width = 2  # sample width in bytes
        framerate = 40000  # frame rate (Hz); matches the 40 kHz sampling rate used here
        frames = filtered_signal.astype(np.int16)  # convert the filtered signal to 16-bit integers
        output_file.setparams((n_channels, sample_width, framerate, len(frames), 'NONE', 'Uncompressed'))
        output_file.writeframes(frames.tobytes())  # write the frame data

def fn_convert_csv_to_wav(wav_file):
    '''
    Convert a CSV file into a WAV signal
    '''
    # Read the CSV file
    data = pd.read_csv(wav_file, names=["力信号", "声信号"])
    # Extract the sound column as a NumPy array
    audio_signal = data["声信号"].values.astype(np.int16)
    # WAV parameters
    n_channels = 1  # mono
    sample_width = 2  # sample width in bytes
    framerate = 40000  # frame rate (Hz)
    frames = audio_signal  # audio frame data
    # Write the WAV file
    wav_file_path = f"{file_output_path}/csv_to_wav_{random.randint(100,999)}.wav"
    with wave.open(wav_file_path, 'wb') as output_file:
        output_file.setparams((n_channels, sample_width, framerate, len(frames), 'NONE', 'Uncompressed'))
        output_file.writeframes(frames.tobytes())
    return gradio.Audio(wav_file_path)
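Both functions above write raw int16 frames through the wave module; a quick round trip confirms the parameters and samples survive intact. A minimal sketch (the file path and sample values are illustrative):

```python
import os
import tempfile
import wave

import numpy as np

# a short int16 test waveform standing in for the filtered signal
samples = (np.sin(np.linspace(0, 2 * np.pi, 100)) * 10000).astype(np.int16)

path = os.path.join(tempfile.mkdtemp(), "roundtrip.wav")
with wave.open(path, "wb") as wf:
    wf.setnchannels(1)      # mono
    wf.setsampwidth(2)      # 16-bit samples
    wf.setframerate(40000)  # matches the 40 kHz acquisition rate
    wf.writeframes(samples.tobytes())

with wave.open(path, "rb") as rf:
    framerate = rf.getframerate()
    readback = np.frombuffer(rf.readframes(rf.getnframes()), dtype=np.int16)

print(framerate)                           # 40000
print(np.array_equal(readback, samples))   # True
```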

def format_sound_results() -> str:
    '''
    Format the results as text.
    Acoustic-curve metrics (order matches g_dataframe2_data):
    1. 声学信号尖峰数量
    2. 最大幅值与最小幅值之差
    3. 声学曲线下面积
    4. 声学信号最大幅值
    5. 声学信号平均幅值
    6. 声学信号峰值的平均值
    7. 幅值标准偏差
    8. 声学曲线长度
    9. 声学信号幅值平方和
    10. 声学信号功率谱的平均幅值
    11. 声学Richardson表观分形维数
    12. 声学Kolmogorov表观分形维数
    13. 声学Minkowski表观分形维数
    14. 声学Korcak表观分形维数
    15. 声信号滤波前后信噪比
    16. 声信号时频分辨率
    '''
    def fmt(value):
        # 空值或缺失值统一显示为N/A
        return 'N/A' if value is None or value == '' else value

    # (标签, g_dataframe2_data索引, 单位)
    items = [
        ('声学信号尖峰数量', 0, '个'),
        ('最大幅值与最小幅值之差', 1, '单位'),
        ('声学曲线下面积', 2, '单位'),
        ('声学信号最大幅值', 3, '单位'),
        ('声学信号平均幅值', 4, '单位'),
        ('声学信号峰值的平均值', 5, '单位'),
        ('幅值标准偏差', 6, '单位'),
        ('声学曲线长度', 7, '单位'),
        ('声学信号幅值平方和', 8, '单位'),
        ('声学信号功率谱的平均幅值', 9, '单位'),
        ('声学Richardson表观分形维数', 10, '单位'),
        ('声学Kolmogorov表观分形维数', 11, '单位'),
        ('声学Minkowski表观分形维数', 12, '单位'),
        ('声学Korcak表观分形维数', 13, '单位'),
    ]

    # 声学曲线部分:逐项格式化(第15、16两项暂未在文本中输出,与原实现保持一致)
    text = "".join(
        f"{i}. {label}:{fmt(g_dataframe2_data[index])} {unit}\n"
        for i, (label, index, unit) in enumerate(items, start=1)
    )

    return text
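格式化逻辑里大量重复的三元表达式可以收拢为一个小工具函数,下面是一个自包含示意(`fmt`为假设的辅助函数名):

```python
def fmt(value):
    # 空值或缺失值统一显示为N/A
    return 'N/A' if value is None or value == '' else value

row = [None, '', '3.14']
print([fmt(v) for v in row])  # ['N/A', 'N/A', '3.14']
```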

def format_force_results() -> str:
    '''
    格式化结果为文本
    力学曲线部分:
    1. 第一个力峰与起点的频率
    2. 最大力与起点的频率
    3. 第一个力峰与力谷之差
    4. 第一个力谷处的力值
    5. 力学曲线力峰的数量
    6. 屈服力与终点力之差
    7. 曲线与横坐标围成的面积
    8. 力学曲线的长度
    9. 力的平均值
    10. 力峰与力谷之差的平均值
    11. 第一个力峰处的力值
    12. 杨氏模量
    13. 位移8mm对应的力值
    14. 力学曲线的最大力峰值
    15. 屈服力与终点力之比
    16. 力学曲线力峰的平均值
    17. 力学信号功率谱的平均幅值
    18. 力学Richardson表观分形维数
    19. 力学Kolmogorov表观分形维数
    20. 力学Minkowski表观分形维数
    21. 力学Korcak表观分形维数
    '''
    # 引用全局变量
    global g_dataframe1_data

    def fmt(value):
        # 空值或缺失值统一显示为N/A
        return 'N/A' if value is None or value == '' else value

    # (标签, g_dataframe1_data索引, 单位;比值项无单位)
    items = [
        ('第一个力峰与起点的频率', 0, 'Hz'),
        ('最大力与起点的频率', 1, 'Hz'),
        ('第一个力峰与力谷之差', 2, '单位'),
        ('第一个力谷处的力值', 3, '单位'),
        ('力学曲线力峰的数量', 4, '个'),
        ('屈服力与终点力之差', 5, '单位'),
        ('曲线与横坐标围成的面积', 6, '单位'),
        ('力学曲线的长度', 7, '单位'),
        ('力的平均值', 8, '单位'),
        ('力峰与力谷之差的平均值', 9, '单位'),
        ('第一个力峰处的力值', 10, '单位'),
        ('杨氏模量', 11, '单位'),
        ('位移8mm对应的力值', 12, '单位'),
        ('力学曲线的最大力峰值', 13, '单位'),
        ('屈服力与终点力之比', 14, ''),
        ('力学曲线力峰的平均值', 15, '单位'),
        ('力学信号功率谱的平均幅值', 16, '单位'),
        ('力学Richardson表观分形维数', 17, '单位'),
        ('力学Kolmogorov表观分形维数', 18, '单位'),
        ('力学Minkowski表观分形维数', 19, '单位'),
        ('力学Korcak表观分形维数', 20, '单位'),
    ]

    # 力学曲线部分:逐项格式化
    text = "".join(
        f"{i}. {label}:{fmt(g_dataframe1_data[index])}{(' ' + unit) if unit else ''}\n"
        for i, (label, index, unit) in enumerate(items, start=1)
    )

    return text

def fn_calc_sound_dataframe_noise(original_signal, filtered_signal) -> None:
    '''
    分析声信号滤波结果:
    1. 声学信号平均幅值, g_dataframe2_data[4]: str
    2. 声学信号峰值的平均值, g_dataframe2_data[5]: str
    '''
    global g_dataframe2_data  # 引用全局变量
    global g_dataframe2_value # 引用全局变量
    
    # 计算声学信号的平均幅值
    avg_amplitude = np.mean(np.abs(filtered_signal))
    g_dataframe2_data[4] = str(avg_amplitude)  # 更新全局变量
    g_dataframe2_value[4][1] = str(avg_amplitude) # 更新全局变量
    
    # 计算声学信号峰值的平均值(峰值取幅值包络的局部最大值)
    peaks, _ = scipy.signal.find_peaks(np.abs(filtered_signal))  # 需已import scipy.signal
    avg_peak_value = np.mean(np.abs(filtered_signal)[peaks]) if peaks.size > 0 else 0.0  # 无峰时避免np.mean返回nan
    g_dataframe2_data[5] = str(avg_peak_value)  # 更新全局变量
    g_dataframe2_value[5][1] = str(avg_peak_value) # 更新全局变量
'''end 自定义函数'''
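其中find_peaks是在幅值包络上找局部最大值,行为可以用一个小数组直观验证:

```python
import numpy as np
from scipy.signal import find_peaks

# 在信号绝对值(幅值包络)上找局部峰
x = np.array([0.0, 1.0, 0.0, -3.0, 0.0, 2.0, 0.0])
peaks, _ = find_peaks(np.abs(x))
print(peaks.tolist())                  # [1, 3, 5]
print(float(np.abs(x)[peaks].mean()))  # 2.0
```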

'''start 界面'''
# 构建Gradio界面Blocks上下文
with gradio.Blocks(css=gradio_css,theme='Taithrah/Minimal') as webui:
    # 占位盒子,防止页面显示不全
    gradio.HTML("""
    <div style="height: 400vh; width: 100%; background-color: #f5f5f5;"></div>
    <a href="#args_selector" id="args_selector_button"><i class="material-icons">回到参数选择区</i></a>
    """)
    gradio.Markdown("# 香梨·力声信号分析系统")
    with gradio.Column():# 纵向排列
        with gradio.Column(): # 纵向排列
            # csv数据文件导入
            gradio.Markdown("## csv数据文件导入")
            interface1 = gradio.Interface(fn=fn_preview_csv, inputs="file", outputs="audio", live=True, 
                             examples=csv_examples,description="导入csv文件或从如下csv文件列表中选择文件")
        with gradio.Column():# 纵向排列
            # 处理方式选择区
            with gradio.Row(elem_id="args_selector"): # 横向排列
                # 信号处理方式选择
                arg1 = gradio.Radio(["STFT", "DWT","Hilbert","EMD","Walsh-Hadamard","Goertzel","DFT"], label="信号处理方式", info="STFT:短时傅里叶变换,DWT:小波变换",value="STFT",interactive=True)
                # 去噪方式选择
                arg2 = gradio.Radio(["谱减滤波","维纳滤波","卡尔曼滤波"],label="去噪方式",value="谱减滤波",interactive=True)
            # 参数调节区
            gradio.Markdown("## STFT参数选择")
            with gradio.Row(): # 横向排列
                # STFT参数选择(注意:visible只接受布尔值,传入lambda恒为真值,组件始终可见;动态显隐需监听change事件返回gradio.update)
                arg3 = gradio.Slider(nperseg_force, nperseg_force+20, value=25, label="力信号窗口长度Nperseg",step=1.0,
                                     info="窗口长度决定了每次傅里叶变换的数据量。较长的窗口可以提供更好的频率分辨率,但时间分辨率较差;较短的窗口则相反。窗口长度需要根据信号特性和分析需求进行选择。",
                                     visible=lambda :arg1.value == "STFT",interactive=True)
                arg4 = gradio.Slider(0, noverlap_force+20, value=12, label="力信号重叠Overlap",step=1.0,
                                     info="相邻窗口之间的重叠部分。增加重叠可以降低窗口边缘效应,提高时间分辨率。重叠部分通常取窗口长度的一半或更多。",
                                     visible=lambda :arg1.value == "STFT",interactive=True)
                arg14 = gradio.Slider(nperseg_sound, nperseg_sound+20, value=25, label="声信号窗口长度Nperseg",step=1.0,  # 下限与力信号滑块一致取预设窗口长度
                                     info="窗口长度决定了每次傅里叶变换的数据量。较长的窗口可以提供更好的频率分辨率,但时间分辨率较差;较短的窗口则相反。窗口长度需要根据信号特性和分析需求进行选择。",
                                     visible=lambda :arg1.value == "STFT",interactive=True)
                arg15 = gradio.Slider(0, noverlap_sound+20, value=12, label="声信号重叠Overlap",step=1.0,
                                     info="相邻窗口之间的重叠部分。增加重叠可以降低窗口边缘效应,提高时间分辨率。重叠部分通常取窗口长度的一半或更多。",
                                     visible=lambda :arg1.value == "STFT",interactive=True)
            with gradio.Row(): # 横向排列
                # STFT参数选择
                arg5 = gradio.Dropdown(["boxcar","triang","blackman","hamming","hann","bartlett","flattop","parzen","bohman","blackmanharris","nuttall","barthann","cosine","exponential","tukey","taylor","lanczos"],value="hann",label="窗口类型Window Function",
                                       info="用于对信号进行加窗处理,以减少频谱泄漏。",
                                       visible=lambda :arg1.value == "STFT",interactive=True)
                arg6 = gradio.Slider(2, sample_rate_force, value=256, label="FFT点数FFT SIZE",step=2.0,
                                     info="进行傅里叶变换时的点数。FFT点数应该大于等于窗口长度,且通常为2的整数次幂,以便利用快速傅里叶变换(FFT)算法提高效率。",
                                     visible=lambda :arg1.value == "STFT",interactive=True)
                arg7 = gradio.Slider(sample_rate_force, sample_rate_force+100, value=sample_rate_force, label="采样频率Sample Rate",step=1.0,
                                     info="力信号的采样频率,决定了时间和频率分辨率的上限。采样频率越高,时间和频率分辨率越好。",
                                     visible=lambda :arg1.value == "STFT",interactive=False)
                arg13 = gradio.Slider(sample_rate_sound, sample_rate_sound+100, value=sample_rate_sound, label="采样频率Sample Rate",step=1.0,
                                     info="声信号的采样频率,决定了时间和频率分辨率的上限。采样频率越高,时间和频率分辨率越好。",
                                     visible=lambda :arg1.value == "STFT",interactive=False)
                arg8 = gradio.Dropdown(choices=["整个窗口","半个窗口","使用Overlap参数"], value="使用Overlap参数",label="时间步长Time Step",
                                     info="每次移动窗口的距离。时间步长决定了时间轴上的分辨率,较小的时间步长可以提供更高的时间分辨率。",
                                     visible=lambda :arg1.value == "STFT",interactive=False)
            gradio.Markdown("## DWT参数选择")
            with gradio.Row():  # 横向排列
                # DWT参数选择
                arg9 = gradio.Radio(["Haar", "Daubechies", "Morlet"], value="Daubechies", label="小波基函数Wavelet Function",
                                    info="不同的小波基函数具有不同的时频特性和适应性,因此需要根据信号特性和分析需求选择合适的小波基函数。",
                                    visible=lambda: arg1.value == "DWT", interactive=True)
                arg10 = gradio.Slider(1, 10, value=5, label="分解层数Decomposition Level",step=1.0,
                                    info="分解层数决定了小波变换的分辨率。较多的分解层数可以提供更好的频率分辨率,但计算复杂度也会增加。",
                                    visible=lambda: arg1.value == "DWT", interactive=True)
                arg11 = gradio.Radio(["周期延拓", "对称延拓", "零填充"], value="对称延拓", label="边界处理Boundary Handling",
                                    info="在实际应用中,信号的长度往往不是2的整数次幂,这就涉及到边界处理问题。选择合适的边界处理方式可以减小边界效应对分析结果的影响。",
                                    visible=lambda: arg1.value == "DWT", interactive=True)
                arg12 = gradio.Slider(0, 1, value=0.5, label="阈值Threshold",step=0.05,
                                    info="在小波变换中,可以通过设置阈值来去除噪声或提取特定特征。阈值的选择需要根据信号噪声水平和分析目的进行调整。",
                                    visible=lambda: arg1.value == "DWT", interactive=True)
            # 提交处理按钮
            fn_apply_args_btn = gradio.Button(value="应用如上参数",elem_classes="btn1")
        # 数据展示区
        with gradio.Row(): # 横向排列
            # 原图
            gradio.Image(value="src/pear_dsp_gnuradio_gradio/采摘0天香梨力声曲线.png",label="参考文献原始数据",show_download_button=False)
            # 导入原始同步力声,一幅图中同时显示该信号
            img5 = gradio.Image(value=create_original_line_plot(g_file_selected),label="导入原始同步力声,一幅图中同时显示该信号",show_download_button=False)
        with gradio.Row(): # 横向排列
            # 带噪声的声信号原图
            gradio.Image(value="src/pear_dsp_gnuradio_gradio/谱减法去噪前的声信号.png",label="参考文献原始数据",show_download_button=False)
            # 导入原始带噪声的声信号
            img6 = gradio.Image(value=create_original_line_plot2(g_file_selected),label="导入原始带噪声的声信号",show_download_button=False)
        with gradio.Row(): # 横向排列
            # 力信号频谱图
            img1 = gradio.Image(value="src/pear_dsp_gnuradio_gradio/static/default.png",label="力信号频谱图/处理图",show_download_button=False,interactive=False)
            # 声信号频谱图
            img2 = gradio.Image(value="src/pear_dsp_gnuradio_gradio/static/default.png",label="声信号频谱图/处理图",show_download_button=False,interactive=False)
        with gradio.Row(): # 横向排列
            # 去噪前声信号
            img3 = gradio.Image(value=create_original_line_plot2(g_file_selected),label="去噪前声信号",show_download_button=False,interactive=False)
            # 去噪后声信号
            img4 = gradio.Image(value="src/pear_dsp_gnuradio_gradio/static/default.png",label="去噪后声信号",show_download_button=False,interactive=False)
        gradio.Markdown("## 分析结果")
        with gradio.Row(): # 横向排列
            text1 = gradio.Textbox(show_copy_button=True,label="力信号分析结果",max_lines=22,interactive=False)
            text2 = gradio.Textbox(show_copy_button=True,label="声信号分析结果",max_lines=21,interactive=False)

            text1.value = format_force_results()
            text2.value = format_sound_results()

        with gradio.Row(): # 横向排列
            dataframe1 = gradio.Dataframe(label="力信号分析结果整理",headers=["参数","值","单位"])
            dataframe2 = gradio.Dataframe(label="声信号分析结果整理",headers=["参数","值","单位"])

        # 绑定按钮功能,将args1~args12传递给fn_apply_args函数
        fn_apply_args_btn.click(fn=fn_apply_args,inputs=[arg1,arg2,arg3,arg4,arg5,arg6,arg7,arg8,arg9,arg10,arg11,arg12,arg13,arg14,arg15,img1,img2],outputs=[img1,img2,img3,img4,img5,img6,text1,text2,dataframe1,dataframe2])
'''end 界面'''

'''start 主函数'''
def main_function():
    print("hello gradio")
    # 启动gradio
    run_gradio()

'''end 主函数'''
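参数区中nperseg、noverlap、window、nfft的含义,可以对照scipy.signal.stft的一个最小示例(参数取值仅为演示):

```python
import numpy as np
from scipy.signal import stft

fs = 40000                          # 采样频率,与声信号的framerate一致
t = np.arange(fs // 10) / fs        # 0.1秒合成信号
x = np.sin(2 * np.pi * 1000 * t)

# nperseg: 窗口长度; noverlap: 相邻窗口重叠; window: 窗函数; nfft: FFT点数
f, seg_t, Zxx = stft(x, fs=fs, window='hann', nperseg=256, noverlap=128, nfft=256)
print(f.size)        # 129,即nfft//2 + 1个频率bin
print(Zxx.shape[0])  # 129
```

窗口越长频率bin越密(频率分辨率越好),但每个时间片覆盖的时长越长(时间分辨率越差),与界面提示语一致。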

运行

cd $PWD
pip install rye
rye sync
rye run main

效果

运行效果

数据来源

  1. 周婷, 莫小明, 查志华, 张金阁, 吴杰. 基于力声信号锯齿化多特征融合的香梨脆度评价[J]. 农业工程学报, 2022, 38(13): 305-312. doi: 10.11975/j.issn.1002-6819.2022.13.033.
  2. 张金阁,周婷,王鹏,等. 香梨脆度的力声同步检测[J]. 农业工程学报,2021,37(1):290-298.