How to run Spark Python code in Jupyter Notebook and via the command prompt

I am trying to import a data frame into Spark using Python's pyspark module. For this, I used Jupyter Notebook and executed the code shown in the screenshot below.
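Roughly, the notebook code was the following (a minimal sketch; the exact session setup is in the screenshot, and te.csv is the file referenced in the answer below):

```python
from pyspark.sql import SparkSession

# Create (or reuse) a local SparkSession for the notebook.
spark = SparkSession.builder.appName("csv-import").getOrCreate()

# Read the CSV into a Spark DataFrame; the file name comes from the answer below.
df = spark.read.csv("te.csv", header=True, inferSchema=True)
df.show()
```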

After that, I wanted to run the same code from CMD, so I saved the Python code in a text file named test.py. Then I ran that file in CMD using the `python test.py` command, as shown in the screenshot below:
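For context, the contents of test.py looked roughly like this (a sketch; names and options are assumed, and the script builds its own SparkSession since there is no notebook kernel behind it):

```python
# test.py -- standalone version of the notebook code (a sketch; details assumed).
from pyspark.sql import SparkSession

if __name__ == "__main__":
    # A standalone script must create its own session.
    spark = SparkSession.builder.appName("test").getOrCreate()

    # Note: a relative path like this is resolved against the directory
    # the script is launched from, not the directory test.py lives in.
    df = spark.read.csv("te.csv", header=True, inferSchema=True)
    df.show()

    spark.stop()
```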

This previously worked, but when I tried to execute the same process 3 or 4 hours later, with no changes made, I unfortunately got the following error message:

Can someone please explain why this has happened?

Many thanks in advance

Topic: data-engineering, pyspark, apache-spark, python, bigdata

Category: Data Science


The first time you executed your code, you probably ran it from the same directory where your te.csv file was located.

Are you executing the code from the same location as before? If not, I suggest you reference the absolute path of te.csv in your code; a relative path is resolved against the current working directory of the CMD session, so the script breaks as soon as you launch it from somewhere else. With an absolute path it should run without a hitch.
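For example, a minimal sketch of the fix (the absolute path below is hypothetical; substitute the real location of te.csv):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("test").getOrCreate()

# Hypothetical absolute path -- replace with wherever te.csv actually lives.
# An absolute path makes the script independent of the directory it is
# launched from, so it works from any CMD location.
df = spark.read.csv(r"C:\Users\you\data\te.csv", header=True, inferSchema=True)
df.show()

spark.stop()
```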
