
Setting trainer_config = {'logger': None} does not work #1621


Open
Aaron1993 opened this issue Jul 31, 2024 · 14 comments
Labels
priority:P1 (High priority) · status:awaiting-team-response · type:bug (Something isn't working)

Comments

@Aaron1993

When I train a NeuralProphet model, I do not want it to save the log directory named 'lightning_logs', so I set trainer_config = {'logger': None/False}, but it still saves the log files. How can I resolve this problem?
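A minimal sketch of the setup being described (df is just a placeholder for the training dataframe; the exact call is my reconstruction, not the original code):

    from neuralprophet import NeuralProphet

    m = NeuralProphet(trainer_config={"logger": False})  # also tried {"logger": None}
    metrics = m.fit(df, freq="D")  # a lightning_logs/ directory is still created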

@ourownstory
Owner

Hi @Aaron1993, can you please provide some details on what behavior you are experiencing vs. what you are expecting?

@samialitop3

I am also facing the same issue when trying to run the NeuralProphet model inside an AWS Lambda function.

AWS Lambda functions do not allow writing files anywhere except the /tmp directory, so I tried setting default_root_dir in trainer_config to /tmp, but it did not work.

Here is how I configured the NeuralProphet model:

        m = NeuralProphet(
            normalize="off",
            daily_seasonality=True,
            trainer_config={"default_root_dir": "/tmp"},
        )

and I get this error when I check the Lambda function logs:

OSError: [Errno 30] Read-only file system: '/var/task/lightning_logs'

@MaiBe-ctrl added the type:bug (Something isn't working), priority:P1 (High priority), and status:awaiting-team-response labels on Aug 22, 2024
@samialitop3

samialitop3 commented Aug 23, 2024

What I expect when I set trainer_config = {'logger': None/False} is for the pytorch_lightning logger to be disabled; alternatively, when setting default_root_dir, I expect lightning_logs to be generated in the specified directory.
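For comparison, this is how those two options behave in plain PyTorch Lightning; whether NeuralProphet forwards trainer_config to the Trainer unchanged is an assumption on my part:

    import pytorch_lightning as pl

    # logger=False disables the default TensorBoardLogger, so no lightning_logs/ is written;
    # default_root_dir controls where logs and checkpoints would otherwise be placed.
    trainer = pl.Trainer(logger=False, default_root_dir="/tmp/ptl_trainer_output")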

@WuttsGood

WuttsGood commented Aug 23, 2024

I'm also running into this issue. Is there a way to create the lightning_logs folder in a different directory?

@jennank

jennank commented Aug 23, 2024

Bump on this, I am also having this issue. Is there a way to turn off the logging to also get around this?

@parksjr5

parksjr5 commented Sep 4, 2024

Bumping, thanks!

@samialitop3

Bumping on this!

@baylenspringer-readysignal

I am experiencing this issue as well. Bumping, thanks!

@samialitop3

Bumping on this!

@samialitop3

@ourownstory Can you please check this for any possible fixes?

@sugyeong-jo

I would like to see this solved too. I want to use this package in a multi-threaded setting, but the folder creation seems to be preventing that.

@csaid

csaid commented Apr 2, 2025

I'm experiencing this when running NeuralProphet in Snowflake's Snowpark, which will only let me write to the /tmp folder. The error I'm getting is:

OSError: [Errno 30] Read-only file system: '/.lr_find_494580fa-8958-4d83-ae9c-ae202c99ae01.ckpt'

The Snowpark docs list some ways to make common libraries write specifically to the /tmp folder: https://docs.snowflake.com/en/user-guide/ui-snowsight/notebooks-troubleshoot#read-only-file-system-issue

Could we have a way to do this in NeuralProphet?
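One guess rather than a confirmed fix: the .lr_find_*.ckpt file is written by PyTorch Lightning's learning-rate finder, and as I understand the defaults NeuralProphet only runs that tuner when no learning_rate is given, so pinning the learning rate might avoid this particular write:

    # Assumption: passing an explicit learning_rate skips the LR finder and its temporary checkpoint.
    m = NeuralProphet(learning_rate=0.01, trainer_config={"default_root_dir": "/tmp"})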

@crocoroc0

For those running NeuralProphet in read-only environments (like AWS Lambda) and hitting the "Read-only file system" (OSError: [Errno 30]) errors above:

The core workaround is to temporarily switch the Current Working Directory (CWD) to a writable path (e.g., /tmp/some_dir) just before initialising NeuralProphet, and then switch it back immediately after.

You also need to set default_root_dir. This is what worked for me:

import os
from neuralprophet import NeuralProphet

temp_np_workdir = "/tmp/neuralprophet_init_workdir"  # Writable temporary directory
original_cwd = os.getcwd()  # Remember the starting directory so we can switch back to it

try:
    os.makedirs(temp_np_workdir, exist_ok=True)
    os.chdir(temp_np_workdir)

    # Configure PyTorch Lightning to use /tmp/ for its default outputs
    trainer_config_for_tmp = {
        "default_root_dir": "/tmp/ptl_trainer_output/"
    }

    m = NeuralProphet(
        trainer_config=trainer_config_for_tmp,
        # ... other NeuralProphet parameters
    )

finally:
    os.chdir(original_cwd)
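If the read-only write happens during training rather than at model construction (as with the .lr_find checkpoint above), the same chdir pattern can also be kept around fit(); a minimal sketch, with df as a placeholder for the training dataframe:

    os.chdir(temp_np_workdir)  # stay in a writable directory while training
    try:
        metrics = m.fit(df, freq="D")
        forecast = m.predict(m.make_future_dataframe(df, periods=30))
    finally:
        os.chdir(original_cwd)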

@csaid

csaid commented May 20, 2025

Thank you @crocoroc0! That worked.
