Looking for data analysis techniques and approach

I'm new to ML and I need to do a data analysis on a dataset I created myself, but I don't know exactly which techniques I should use. The dataset has the following attributes: sensor_id,date,time,lat,log,temperature,noise,noise_dba,pm10,humidity,pm25,relative_humidity,wind_speed,sea_level_pressure,solar_elevation_angle,solar_radiation,pressure,snow,uv,wind_direction,visibility,clouds.

It's a dataset from IoT devices that measure noise, air pollution (pm10, pm25) and some other weather characteristics. What I want to achieve now is to determine whether increased noise goes together with increased air pollution (both pm10 and pm25 increasing), or whether these attributes are not dependent at all, and in either case which attributes play a key role in the increase/decrease of air pollution. What comes to mind is computing the correlation between these attributes (something like the sketch below), but since I'd like to turn this into a paper (my first one, if I manage to publish it), I want to apply some additional techniques to this question and have more reliable, more mathematically grounded evidence.
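To make the correlation step concrete, this is roughly what I had in mind so far. It is only a minimal sketch: I'm assuming the data sits in a single CSV file (here called `sensor_readings.csv`, a placeholder name) with the column names listed above.

```python
import pandas as pd
from scipy import stats

# Assumed file name; columns are the attributes listed above.
df = pd.read_csv("sensor_readings.csv")
df = df.dropna(subset=["noise", "pm10", "pm25"])

# Pearson measures linear association, Spearman monotonic association;
# reporting both (with p-values) seems safer for raw sensor data.
for pollutant in ["pm10", "pm25"]:
    pearson_r, pearson_p = stats.pearsonr(df["noise"], df[pollutant])
    spearman_r, spearman_p = stats.spearmanr(df["noise"], df[pollutant])
    print(f"noise vs {pollutant}: "
          f"Pearson r={pearson_r:.3f} (p={pearson_p:.3g}), "
          f"Spearman rho={spearman_r:.3f} (p={spearman_p:.3g})")
```

This only quantifies pairwise association, which is why I'm looking for techniques that go beyond it.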

Judging by the research I have done so far, I have found a few techniques, but I don't know whether they fit my problem well. The ones I have now are conjoint analysis, discriminant analysis, canonical correlation analysis, structural equation modeling, multidimensional scaling and cohort analysis. I don't have experience with most of these, so what do you think? Are these good techniques for this kind of data analysis (especially for my problem)? If not, which techniques and what overall approach would you suggest?
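For the second part of the question (which attributes play a key role for pm10/pm25), one idea I could also try is a regression-based feature ranking. The sketch below uses a random forest regressor, which is not on my list above, just one possible model, and again assumes the same hypothetical CSV and column names.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_readings.csv")

# Candidate predictors; identifiers and timestamps are left out.
features = ["noise", "noise_dba", "temperature", "humidity", "relative_humidity",
            "wind_speed", "sea_level_pressure", "solar_elevation_angle",
            "solar_radiation", "pressure", "snow", "uv", "wind_direction",
            "visibility", "clouds"]
target = "pm25"  # repeat the same analysis with "pm10"

data = df[features + [target]].dropna()
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data[target], test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("R^2 on held-out data:", model.score(X_test, y_test))
importances = pd.Series(model.feature_importances_, index=features)
print(importances.sort_values(ascending=False))
```

Would something like this be acceptable as supporting evidence in a paper, or should I rather stick to one of the more classical statistical techniques I listed?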

Topic data-analysis correlation machine-learning

Category Data Science
