吾爱破解 - 52pojie.cn

Views: 704 | Replies: 8

[Help] How do I handle this error in Python?

jtwc posted on 2023-9-12 19:24
Teachers, how do I handle this error in Python? Thank you.
Traceback (most recent call last):
  File "E:/副本.py", line 64, in <module>
    csv_to_dataset("daily.csv")
  File "E:/副本.py", line 33, in csv_to_dataset
    y_normaliser.fit(next_day_open_values)
  File "C:\ProgramData\Anaconda3\envs\TF2.1\lib\site-packages\sklearn\preprocessing\_data.py", line 416, in fit
    return self.partial_fit(X, y)
  File "C:\ProgramData\Anaconda3\envs\TF2.1\lib\site-packages\sklearn\preprocessing\_data.py", line 458, in partial_fit
    force_all_finite="allow-nan",
  File "C:\ProgramData\Anaconda3\envs\TF2.1\lib\site-packages\sklearn\base.py", line 566, in _valIDAte_data
    X = check_array(X, **check_params)
  File "C:\ProgramData\Anaconda3\envs\TF2.1\lib\site-packages\sklearn\utils\validation.py", line 808, in check_array
    % (n_samples, array.shape, ensure_min_samples, context)
ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required by MinMaxScaler.
[Python]
import pandas as pd
import matplotlib.pyplot as plt
from keras import layers
import tensorflow as tf
from keras.models import Model
from keras.layers import Dense, Dropout, LSTM, Input, Activation, concatenate
import numpy as np
from keras import optimizers
from sklearn import preprocessing


np.random.seed(4)
tf.random.set_seed(4)
history_points = 50

def csv_to_dataset(csv_path):
    data = pd.read_csv(csv_path)
    data = data.drop('date', axis=1)
    data = data.drop(0, axis=0)
    data_normaliser = preprocessing.MinMaxScaler()
    data_normalised = data_normaliser.fit_transform(data)

    # using the last {history_points} open high low close volume data points, predict the next open value
    ohlcv_histories_normalised = np.array(
        [data_normalised[i: i + history_points].copy() for i in range(len(data_normalised) - history_points)])
    next_day_open_values_normalised = np.array(
        [data_normalised[:, 0][i + history_points].copy() for i in range(len(data_normalised) - history_points)])
    next_day_open_values_normalised = np.expand_dims(next_day_open_values_normalised,
                                                     axis=-1)  # add 'axis' parameter here
    next_day_open_values = np.array([data[:, 0][i + history_points].copy() for i in range(len(data) - history_points)])
    next_day_open_values = np.expand_dims(next_day_open_values, axis=-1)  # and here
    y_normaliser = preprocessing.MinMaxScaler()
    y_normaliser.fit(next_day_open_values)

    assert ohlcv_histories_normalised.shape[0] == next_day_open_values_normalised.shape[0]
    return ohlcv_histories_normalised, next_day_open_values_normalised, next_day_open_values, y_normaliser
    ohlcv_histories, next_day_open_values, unscaled_y, y_normaliser = csv_to_dataset('daily.csv')
    test_split = 0.9  # the percent of data to be used for testing
    n = int(ohlcv_histories.shape[0] * test_split)
    # splitting the dataset up into train and test sets
    ohlcv_train = ohlcv_histories[:n]
    y_train = next_day_open_values[:n]
    ohlcv_test = ohlcv_histories[n:]
    y_test = next_day_open_values[n:]
    unscaled_y_test = unscaled_y[n:]

    lstm_input = Input(shape=(history_points, 5), name='lstm_input')
    x = LSTM(50, name='lstm_0')(lstm_input)
    x = Dropout(0.2, name='lstm_dropout_0')(x)
    x = Dense(64, name='dense_0')(x)
    x = Activation('sigmoid', name='sigmoid_0')(x)
    x = Dense(1, name='dense_1')(x)
    output = Activation('linear', name='linear_output')(x)
    model = Model(inputs=lstm_input, outputs=output)
    adam = optimizers.Adam(lr=0.0005)
    model.compile(optimizer=adam, loss='mse')
    from keras.utils import plot_model
    # plot_model(model, to_file='model.png')
    model.fit(x=ohlcv_train, y=y_train, batch_size=32, epochs=50, shuffle=True, validation_split=0.1)
    evaluation = model.evaluate(ohlcv_test, y_test)
    print(evaluation)

csv_to_dataset("daily.csv")
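
For what it's worth, the ValueError at the bottom of the traceback says next_day_open_values came out with zero rows (shape (0, 1)), which suggests that after drop(0, axis=0) daily.csv has 50 or fewer usable rows, so range(len(data) - history_points) is empty and MinMaxScaler is asked to fit an empty array. A minimal sketch of a guard that reports this before the scaler complains (the helper name, check, and message are illustrative, not part of the original script):

[Python]
import pandas as pd

history_points = 50

def load_csv_checked(csv_path):
    # Illustrative helper, not from the original script.
    data = pd.read_csv(csv_path)
    data = data.drop('date', axis=1)   # same preprocessing as the original
    data = data.drop(0, axis=0)
    # MinMaxScaler needs at least one sample, so more than history_points rows
    # must remain for range(len(data) - history_points) to be non-empty.
    if len(data) <= history_points:
        raise ValueError(
            f"{csv_path} has only {len(data)} usable rows; "
            f"more than {history_points} are needed to build a sample")
    return data

csv_to_dataset could call a check like this (or wrap the fit in try/except) so the failure points at the data file rather than at the scaler.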


chinaboy008 posted on 2023-9-12 21:03
I'm also learning Python at the moment, so I'm here to learn from this.
wapjsx posted on 2023-9-12 21:40
Is your indentation correct?

It feels like the function is calling itself with csv_to_dataset('daily.csv') and running in an endless loop, doesn't it?

In other words, line 37 probably needs to be written at the top level (unindented), right?
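
For reference (a minimal sketch, not taken from the thread's code): in Python, indentation alone decides whether that final call sits inside csv_to_dataset or at module level, and a call indented inside the function after its return statement is never even reached.

[Python]
def csv_to_dataset(csv_path):
    result = csv_path          # stand-in for the real processing
    return result
    csv_to_dataset(csv_path)   # indented inside, after the return: dead code, never runs

csv_to_dataset('daily.csv')    # unindented: the module-level call that actually runs once
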
anawwy posted on 2023-9-12 22:29
Yes, it's looping endlessly; Python relies on indentation to show the level of nesting.
OP | jtwc posted on 2023-9-12 22:58
wapjsx posted on 2023-9-12 21:40
Is your indentation correct?

It feels like the function is calling itself with csv_to_dataset('daily.csv') and running in an endless loop, doesn't it?

Thank you, teacher.
sai609 posted on 2023-9-12 23:02
You'd have to use a break to stop it from running.
wHoRU posted on 2023-9-13 08:01
Why is there an import on line 57? I'd suggest putting all the imports at the top of the file.
小雨网络 posted on 2023-9-13 11:02
[Python]
import pandas as pd
import numpy as np
from sklearn import preprocessing
from keras.models import Model
from keras.layers import Dense, Dropout, LSTM, Input, Activation
from keras import optimizers

np.random.seed(4)

history_points = 50

def csv_to_dataset(csv_path):
    data = pd.read_csv(csv_path)
    data = data.drop('date', axis=1)
    data = data.drop(0, axis=0)
    data_normaliser = preprocessing.MinMaxScaler()
    data_normalised = data_normaliser.fit_transform(data)

    ohlcv_histories_normalised = np.array(
        [data_normalised[i: i + history_points].copy() for i in range(len(data_normalised) - history_points)])
    
    next_day_open_values_normalised = np.array(
        [data_normalised[:, 0][i + history_points].copy() for i in range(len(data_normalised) - history_points)])
    next_day_open_values_normalised = np.expand_dims(next_day_open_values_normalised, axis=-1)
    
    next_day_open_values = np.array([data[:, 0][i + history_points].copy() for i in range(len(data) - history_points)])
    next_day_open_values = np.expand_dims(next_day_open_values, axis=-1)

    y_normaliser = preprocessing.MinMaxScaler()
    y_normaliser.fit(next_day_open_values)

    assert ohlcv_histories_normalised.shape[0] == next_day_open_values_normalised.shape[0]
    
    return ohlcv_histories_normalised, next_day_open_values_normalised, next_day_open_values, y_normaliser

ohlcv_histories, next_day_open_values_normalised, next_day_open_values, y_normaliser = csv_to_dataset('daily.csv')
test_split = 0.9
n = int(ohlcv_histories.shape[0] * test_split)

ohlcv_train = ohlcv_histories[:n]
y_train = next_day_open_values_normalised[:n]
ohlcv_test = ohlcv_histories[n:]
y_test = next_day_open_values_normalised[n:]

lstm_input = Input(shape=(history_points, 5), name='lstm_input')
x = LSTM(50, name='lstm_0')(lstm_input)
x = Dropout(0.2, name='lstm_dropout_0')(x)
x = Dense(64, name='dense_0')(x)
x = Activation('sigmoid', name='sigmoid_0')(x)
x = Dense(1, name='dense_1')(x)
output = Activation('linear', name='linear_output')(x)

model = Model(inputs=lstm_input, outputs=output)
adam = optimizers.Adam(lr=0.0005)
model.compile(optimizer=adam, loss='mse')

model.fit(x=ohlcv_train, y=y_train, batch_size=32, epochs=50, shuffle=True, validation_split=0.1)
evaluation = model.evaluate(ohlcv_test, y_test)
print(evaluation)


I made a few changes.
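
One thing worth double-checking in both the original and the revised script (an observation from reading the code, not something raised in the thread): data is still a pandas DataFrame when data[:, 0] is evaluated, and a DataFrame does not accept that positional slice, so the comprehension that builds next_day_open_values will raise as soon as the range is non-empty. A minimal sketch of the adjustment, assuming 'open' is the first remaining column after dropping 'date':

[Python]
import numpy as np
import pandas as pd

history_points = 50

def build_next_day_open(data: pd.DataFrame) -> np.ndarray:
    # Illustrative helper: column 0 is assumed to be 'open' after dropping 'date'.
    raw = data.values  # positional slicing like raw[:, 0] needs a NumPy array, not a DataFrame
    values = np.array([raw[:, 0][i + history_points]
                       for i in range(len(raw) - history_points)])
    return np.expand_dims(values, axis=-1)
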


OP | jtwc posted on 2023-9-13 14:59
小雨网络 posted on 2023-9-13 11:02
import pandas as pd
import numpy as np
from sklearn import preprocessing
...

Thank you, teacher.