Genre: eLearning | MP4 | Video: h264, 1280x720 | Audio: aac, 44100 Hz
Language: English | VTT | Size: 4.76 GB | 6 sections | 63 lectures (9h 47m)
The future belongs to the AI era of machine learning, so mastering applied machine learning is like holding a key to your future career.
What you'll learn
How the XGBoost algorithm works to predict different model targets
What roles decision trees play in gradient boosting and XGBoost modeling
Why XGBoost is to date one of the most powerful and stable machine learning methods in Kaggle contests
How to explain and set appropriate XGBoost modeling parameters
How to apply data exploration, cleaning, and preparation for the XGBoost method
How to effectively implement the different types of XGBoost models using the packages in Python
How to perform feature engineering in XGBoost predictive modeling
How to conduct statistical analysis and feature selection in XGBoost modeling
How to explain and select the typical evaluation measures and model objectives for building XGBoost models
How to perform cross-validation and determine the best parameter thresholds
How to perform parameter tuning in XGBoost model building
How to successfully apply XGBoost to solving various machine learning problems
Requirements
Basic math background
Basic computer skills
If you could learn only one tool or algorithm for machine learning or building predictive models today, which would it be? Without a doubt, XGBoost! If you are going to participate in a Kaggle contest, what is your preferred modeling tool? Again, the answer is XGBoost! This is confirmed by countless experienced data scientists and newcomers alike. So register for this course!
XGBoost is famous in Kaggle contests for its excellent accuracy, speed, and stability. For example, according to one survey, more than 70% of top Kaggle winners said they had used XGBoost.
XGBoost is genuinely useful and serves many purposes in the data science world. This powerful algorithm is frequently used to predict various types of targets (continuous, binary, and categorical), and it has also proven very effective on multiclass and multilabel classification problems. In addition, the contests on the Kaggle platform cover almost every application and industry in the world, such as retail, banking, insurance, pharmaceutical research, traffic control, and credit risk management.
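In XGBoost, the target type is selected through the `objective` parameter. As a small illustration of the idea above, the sketch below maps each target type to a standard objective string from the xgboost documentation; the `pick_objective` helper itself is hypothetical, written only to make the mapping concrete.

```python
# Standard xgboost objective strings for common target types.
OBJECTIVES = {
    "continuous": "reg:squarederror",  # regression on a numeric target
    "binary": "binary:logistic",       # two-class probability output
    "multiclass": "multi:softprob",    # per-class probabilities (also needs num_class)
}

def pick_objective(target_type: str) -> str:
    """Return the xgboost 'objective' setting for a given target type.
    (Hypothetical helper for illustration only.)"""
    try:
        return OBJECTIVES[target_type]
    except KeyError:
        raise ValueError(f"Unsupported target type: {target_type}")

print(pick_objective("binary"))  # binary:logistic
```

The chosen string is then passed as the `objective` parameter when the model is created.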
XGBoost is powerful, but it is not easy to exercise its full capabilities without expert guidance. For example, to successfully implement the XGBoost algorithm, you also need to understand and adjust many parameter settings. To that end, I will teach you the underlying algorithm so that you can configure XGBoost to suit different data and application scenarios. In addition, I will provide intensive lectures on feature engineering, feature selection, and parameter tuning aimed at XGBoost. After this training, you should also be able to prepare suitable data and features that feed the XGBoost model well.
This course is practical but not lacking in theory. We start from decision trees and their related concepts and components, move on to constructing gradient boosting methods, and then proceed to XGBoost modeling. Math and statistics are applied lightly to explain the mechanisms behind all the machine learning methods. We use Python pandas data frames for data exploration and cleaning. One notable feature of this course is the many Python program examples that demonstrate every single knowledge point and skill taught in the lectures.
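The progression above, from cleaning data with pandas, to a single decision tree, to an ensemble of boosted trees, can be sketched in a few lines. The table below is fabricated for illustration, and scikit-learn models stand in for the course's XGBoost models, which follow the same fit/predict pattern.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy table with one missing value.
df = pd.DataFrame({
    "age":    [25, 32, 47, np.nan, 51, 38, 29, 44],
    "income": [30, 60, 80, 45, 90, 70, 35, 75],
    "bought": [0, 1, 1, 0, 1, 1, 0, 1],
})

# Basic cleaning with pandas: impute the missing age with the column median.
df["age"] = df["age"].fillna(df["age"].median())

X, y = df[["age", "income"]], df["bought"]

# A single shallow decision tree is the building block...
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# ...and gradient boosting fits many shallow trees in sequence, each one
# correcting the errors of the ensemble so far. XGBoost refines this same
# idea with regularization and faster training.
boost = GradientBoostingClassifier(n_estimators=50, max_depth=2,
                                   random_state=0).fit(X, y)

print(tree.score(X, y), boost.score(X, y))
```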
Who this course is for:
Anyone who enjoys Kaggle contests
Anyone who wishes to learn how to apply machine learning and data science approaches to business