吾爱破解 - 52pojie.cn

[Help] How should I handle the following Python error?

jtwc posted on 2023-9-7 17:09
Last edited by jtwc on 2023-9-7 17:11

Dear teachers, how should I handle the following Python error? Thanks.
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3361, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas\_libs\index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas\_libs\hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'open'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:/main_test.py", line 148, in <module>
    is_intraday=True, is_lack_margin=is_lack_margin, args=args)
  File "E:\environment.py", line 137, in __init__
    self.Data=preProcessData(data_fn)  
  File "E:\environment.py", line 51, in preProcessData
    df[newLabel] = df[cn]
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py", line 3458, in __getitem__
    indexer = self.columns.get_loc(key)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3363, in get_loc
    raise KeyError(key) from err
KeyError: 'open'
[Python] code:
import numpy as np
import argparse
from copy import deepcopy
import random
import torch
from timeit import default_timer as timer

from evaluator import Evaluator
from rdpg import RDPG
from util import *
from environment import environment

torch.cuda.empty_cache()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='PyTorch on Financial trading--iRDPG algorithm')
    
    ##### Model Setting #####
    # parser.add_argument('--rnn_mode', default='lstm', type=str, help='RNN mode: LSTM/GRU')
    parser.add_argument('--rnn_mode', default='gru', type=str, help='RNN mode: LSTM/GRU')
    parser.add_argument('--input_size', default=14, type=int, help='num of features for input state')
    parser.add_argument('--seq_len', default=15, type=int, help='sequence length of input state')
    parser.add_argument('--num_rnn_layer', default=2, type=int, help='num of rnn layer')
    parser.add_argument('--hidden_rnn', default=128, type=int, help='hidden num of lstm layer')
    parser.add_argument('--hidden_fc1', default=256, type=int, help='hidden num of 1st-fc layer')
    parser.add_argument('--hidden_fc2', default=64, type=int, help='hidden num of 2nd-fc layer')
    parser.add_argument('--hidden_fc3', default=32, type=int, help='hidden num of 3rd-fc layer')
    parser.add_argument('--init_w', default=0.005, type=float, help='initialize model weights') 
    
    ##### Learning Setting #####
    parser.add_argument('--r_rate', default=0.0001, type=float, help='gru layer learning rate')  
    parser.add_argument('--c_rate', default=0.0001, type=float, help='critic net learning rate') 
    parser.add_argument('--a_rate', default=0.0001, type=float, help='policy net learning rate (only for DDPG)')
    parser.add_argument('--beta1', default=0.3, type=float, help='momentum beta1 for Adam optimizer')
    parser.add_argument('--beta2', default=0.9, type=float, help='momentum beta2 for Adam optimizer')
    parser.add_argument('--sch_step_size', default=16*150, type=float, help='LR_scheduler: step_size')
    parser.add_argument('--sch_gamma', default=0.5, type=float, help='LR_scheduler: gamma')
    parser.add_argument('--bsize', default=100, type=int, help='minibatch size')
    
    ##### RL Setting #####
    parser.add_argument('--warmup', default=100, type=int, help='only filling the replay memory without training')
    parser.add_argument('--discount', default=0.95, type=float, help='future rewards discount rate')
    parser.add_argument('--a_update_freq', default=3, type=int, help='actor update frequency (per N steps)')
    parser.add_argument('--Reward_max_clip', default=15., type=float, help='max DSR reward for clipping')
    parser.add_argument('--tau', default=0.002, type=float, help='moving average for target network')
    ##### original Replay Buffer Setting #####
    parser.add_argument('--rmsize', default=12000, type=int, help='memory size')
    parser.add_argument('--window_length', default=1, type=int, help='')  
    ##### Exploration Setting #####
    parser.add_argument('--ou_theta', default=0.18, type=float, help='noise theta of Ornstein Uhlenbeck Process')
    parser.add_argument('--ou_sigma', default=0.3, type=float, help='noise sigma of Ornstein Uhlenbeck Process') 
    parser.add_argument('--ou_mu', default=0.0, type=float, help='noise mu of Ornstein Uhlenbeck Process') 
    parser.add_argument('--epsilon_decay', default=100000, type=int, help='linear decay of exploration policy')
    
    ##### Training Trajectory Setting #####
    parser.add_argument('--exp_traj_len', default=16, type=int, help='segmented experience trajectory length')  
    parser.add_argument('--train_num_episodes', default=2000, type=int, help='train iters each episode')  
    ### Also use in Test (Evaluator) Setting ###
    parser.add_argument('--max_episode_length', default=240, type=int, help='the max episode length is 240 minutes in one day')  
    parser.add_argument('--test_episodes', default=243, type=int, help='how many episodes to perform during the testing period')
    
    ##### PER Demonstration Buffer #####
    parser.add_argument('--is_PER_replay', default=True, help='conduct PER memory or not')
    parser.add_argument('--is_pretrain', default=True, action='store_true', help='conduct pretrain or not')
    parser.add_argument('--Pretrain_itrs', default=10, type=int, help='number of pretrain iterations')
    parser.add_argument('--is_demo_warmup', default=True, action='store_true', help='Execute demonstration buffer')
    parser.add_argument('--PER_size', default=40000, type=int, help='memory size for PER')
    parser.add_argument('--p_alpha', default=0.3, type=float, help='the power of priority for each experience')
    parser.add_argument('--lambda_balance', default=50, type=int, help='priority coefficient for weighting the gradient term')
    parser.add_argument('--priority_const', default=0.1, type=float, help='priority constant for demonstration experiences')
    parser.add_argument('--small_const', default=0.001, type=float, help='priority constant for agent experiences')
    
    ##### Behavior Cloning #####
    parser.add_argument('--is_BClone', default=True, action='store_true', help='conduct behavior cloning or not')
    parser.add_argument('--is_Qfilt', default=False, action='store_true', help='conduct Q-filter or not')
    parser.add_argument('--use_Qfilt', default=100, type=int, help='set the episode after warmup to use Q-filter')
    parser.add_argument('--lambda_Policy', default=0.7, type=float, help='The weight for actor loss')
    # parser.add_argument('--lambda_BC', default=0.5, type=int, help='The weight for BC loss after Q-filter, default is equal to (1-lambda_Policy)')
    
    ##### Other Setting #####
    parser.add_argument('--seed', default=627, type=int, help='seed number')
    parser.add_argument('--date', default=629, type=int, help='date for output file name')
    parser.add_argument('--save_threshold', default=20, type=int, help='lack margin stop ratio')
    parser.add_argument('--lackM_ratio', default=0.7, type=float, help='lack margin stop ratio')
    parser.add_argument('--debug', default=True, dest='debug', action='store_true')
    parser.add_argument('--checkpoint', default="checkpoints", type=str, help='Checkpoint path')
    parser.add_argument('--logdir', default='log')
    parser.add_argument('--mode', default='test', type=str, help='support option: train/test')
    # parser.add_argument('--mode', default='train', type=str, help='support option: train/test')
    
    
    args = parser.parse_args()
    #######################################################################################################

    ####################################################################################################
    '''##### Run Task #####'''
    if args.seed > 0:
        np.random.seed(args.seed)
        random.seed(args.seed)

    is_lack_margin = True
    # is_lack_margin = False
    
    ##### Demonstration Setting #####
    if args.is_demo_warmup:
        data_fn = "data_preprocess/IF_tech_oriDT.csv"
        demo_env = environment(data_fn=data_fn, data_mode='random', duration='train', is_demo=True, 
                               is_intraday=True, is_lack_margin=is_lack_margin, args=args)
    else:
        demo_env = None
        
        
    ##### Run Training #####
    start_time = timer()
    if args.mode == 'train':
        print('##### Run Training #####')
        ### train_env setting ###
        data_mode = 'random'  # random select a day for a trading episode (240 minutes)
        duration = 'train'  # training period from 2016/1/1 to 2018/5/8
        
        data_fn = "data_preprocess/IF_prophetic.csv"
        train_env = environment(data_fn=data_fn, data_mode=data_mode, duration=duration, is_demo=False, 
                                is_intraday=True, is_lack_margin=is_lack_margin, args=args)
        
        ### Run training ###
        rdpg = RDPG(demo_env, train_env, args)
        rdpg.train(args.train_num_episodes, args.checkpoint, args.debug)
        
        end_time = timer()
        minutes, seconds = (end_time - start_time)//60, (end_time - start_time)%60
        print(f"\nTraining time taken: {minutes} minutes {seconds:.1f} seconds")
    
    ##### Run Testing #####
    elif args.mode == 'test':
        torch.cuda.empty_cache()
        print('##### Run Testing #####')
        ### test_env setting ###
        # is_demo = True
        is_demo = False  
        data_mode = 'time_order'  
        duration = 'test'  # testing period from 2018/5/9 to 2019/5/8
        is_lack_margin = True
        
        # data_fn = "data_preprocess/IF_prophetic.csv"
        data_fn = "data_preprocess/IC_prophetic.csv"
        test_env = environment(data_fn=data_fn,  data_mode=data_mode, duration=duration, is_demo=is_demo, 
                                is_intraday=True, is_lack_margin=is_lack_margin, args=args)
        rdpg = RDPG(demo_env, test_env, args)
        

        description = 'iRDPG_agent' 
        model_fn = description +'.pkl'
        rdpg.test(args.checkpoint, model_fn, description, lackM=is_lack_margin, debug=args.debug)
                
            
        end_time = timer()
        minutes, seconds = (end_time - start_time)//60, (end_time - start_time)%60
        print(f"\nTesting time taken: {minutes} minutes {seconds:.1f} seconds")
        
    else:
        raise RuntimeError('undefined mode {}'.format(args.mode))
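For reference, the chained KeyError in the traceback can be reproduced in a few lines (the frame below is a hypothetical stand-in; the real CSV read by preProcessData in environment.py is not shown in the thread):

```python
import pandas as pd

# Hypothetical frame: the header is 'Open' (capitalized), so there is no 'open'
df = pd.DataFrame({'Open': [3900.0, 3905.0]})

try:
    df['new_open'] = df['open']   # same pattern as df[newLabel] = df[cn]
    failed = False
except KeyError as e:
    failed = True
    missing_key = e.args[0]       # 'open', exactly as in the traceback
```

Column lookup on a DataFrame behaves like a dict lookup: any mismatch between the requested label and the stored header, however small, raises KeyError.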



shoot82003 posted on 2023-9-7 17:45
[Python] code:
File "E:\environment.py", line 51, in preProcessData
    df[newLabel] = df[cn]

The KeyError is raised because the code tries to access a DataFrame column using the key 'open', and the DataFrame has no column named 'open'.
Make sure the DataFrame actually contains a column named 'open', or check the column name for spelling mistakes.
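A minimal sketch of that check (the column names here are hypothetical stand-ins for the real CSV):

```python
import pandas as pd

# Hypothetical frame standing in for the CSV read inside preProcessData
df = pd.DataFrame({'Open': [3900.0], 'close': [3910.0]})

cn, newLabel = 'open', 'new_open'
if cn in df.columns:
    df[newLabel] = df[cn]
else:
    # Report what IS there instead of dying with a bare KeyError
    print(f"column {cn!r} not found; available: {list(df.columns)}")
```

Guarding the assignment this way turns the crash into a message that shows the actual headers, which usually makes the mismatch obvious.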


最初的未来 posted on 2023-9-7 17:47
OP | jtwc posted on 2023-9-7 17:51
最初的未来 posted on 2023-9-7 17:53
Quote (jtwc, 2023-9-7 17:51):
ChatGPT just spouts nonsense with a straight face.

Speaking of spouting nonsense, you should see Wenxin Yiyan (ERNIE Bot).
打金者BT posted on 2023-9-7 17:55
Last edited by 打金者BT on 2023-9-7 17:57

What can anyone tell from this little bit of code...
Is this "open" the name of a column in the DataFrame? I suggest confirming whether that column actually exists.
OP | jtwc posted on 2023-9-7 18:16
Quote (shoot82003, 2023-9-7 17:45):
File "E:\environment.py", line 51, in preProcessData
    df[newLabel] = df[cn ...

Teacher, how exactly do I do that?
OP | jtwc posted on 2023-9-7 18:17
Quote (打金者BT, 2023-9-7 17:55):
What can anyone tell from this little bit of code...
Is this "open" the name of a column in the DataFrame? I suggest confirming whether that column actually exists.

Teacher, "open" is the name of a column in the DataFrame, and the column does exist.
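One way a column can "exist" yet still miss: DataFrame lookup is case-sensitive, so a header like 'Open' does not match 'open'. A small illustration (hypothetical headers):

```python
import pandas as pd

# A frame that visually "has" an open column, but capitalized
df = pd.DataFrame({'Open': [1.0]})
before = 'open' in df.columns          # False: lookup is case-sensitive

df.columns = df.columns.str.lower()    # normalize headers to lowercase
after = 'open' in df.columns           # True after normalizing
```

Normalizing the headers once, right after reading the CSV, makes every later lookup insensitive to how the file happened to capitalize them.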
ccber posted on 2023-9-7 18:17
Do your column names have spaces or something like that in them?
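That is easy to check and fix; headers read from a CSV often carry stray spaces or tabs (hypothetical headers below):

```python
import pandas as pd

# ' open' and 'close\t' look fine when printed, but do not match 'open'/'close'
df = pd.DataFrame({' open': [1.0], 'close\t': [2.0]})

df.columns = df.columns.str.strip()    # drop surrounding whitespace
```

After stripping, `df['open']` resolves normally.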
打金者BT posted on 2023-9-7 18:24
Quote (jtwc, 2023-9-7 18:17):
Teacher, "open" is the name of a column in the DataFrame, and the column does exist.

Just print df on the line right before the error and take a look.
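When printing, use repr() on each column name rather than a plain print: repr exposes invisible characters. A UTF-8 BOM in the first header is a classic way for 'open' to look present yet not match (the BOM header below is a hypothetical example):

```python
import pandas as pd

# A BOM-prefixed header, as produced by some CSV exports read without utf-8-sig
df = pd.DataFrame({'\ufeffopen': [1.0]})

for c in df.columns:
    print(repr(c))                     # shows '\ufeffopen', not 'open'

has_plain_open = 'open' in df.columns  # False despite appearances
```

If a BOM does turn up, re-reading the file with `pd.read_csv(path, encoding='utf-8-sig')` strips it from the first header.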