Decision tree confusion matrix in Python: create the machine learning model, then evaluate it.

A decision tree is one of the most popular supervised learning algorithms, and it can be used for regression and classification tasks alike. It is non-parametric, meaning it makes no assumption about how the data are distributed, and scikit-learn's DecisionTreeClassifier implements an optimized version of the CART (Classification and Regression Tree) algorithm. In a fraud-detection setting, for example, the tree learns patterns and decision rules from the input features in order to classify each transaction as either fraudulent or legitimate.

A confusion matrix is the standard tool for judging how well such a classifier works. It presents a table layout of the different outcomes of a classification problem, setting every predicted value of the classifier against the actual value, so it describes in detail how many observations were classified correctly and how many were misclassified. In other words, it tells us how the model performed in terms of false positives and false negatives, and it only ever appears in the context of a classification problem. For a binary task such as ham/spam detection the matrix is 2 x 2; when there are more than two potential outcomes, we simply extend the number of rows and columns. (Weka reports the same kind of matrix for a J48 tree: what the model gets right and what it gets wrong.)

The confusion matrix matters because plain accuracy can be misleading on imbalanced data. If you are trying to detect fraud and only 1 out of 1,000 transactions is fraudulent, a model that predicts "no fraud" for every case is still about 99.9% accurate while catching nothing; this is the same effect that makes a logistic regression on an imbalanced binary outcome report 0% precision for the minority class. Precision and recall, both computed from the confusion matrix, give a far more honest picture and are easy to calculate in Python.

Scikit-learn has easy-to-use functions for splitting data into training and testing sets, training a model, making predictions, and evaluating the result. Creating a confusion matrix is just as simple: confusion_matrix() takes your predictions and the correct values as input, and the result can be displayed with something like plt.imshow(cm, cmap='binary'). One caveat: if you are sweeping decision thresholds to build a ROC curve for a decision tree, recomputing the confusion matrix at every threshold is slow, so use the dedicated ROC utilities instead (more on this later).
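As a concrete starting point, here is a minimal, self-contained sketch (not taken from any of the sources quoted above): it fabricates an imbalanced, fraud-like dataset, fits a decision tree, and prints the accuracy, confusion matrix, and classification report. Every dataset and parameter choice in it is an illustrative assumption.

```python
# Fabricate an imbalanced binary dataset, fit a decision tree, and evaluate it.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# Roughly 1% positive (fraudulent) cases, echoing the 1-in-1,000 example above.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.99, 0.01],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=42)

clf = DecisionTreeClassifier(random_state=42)     # scikit-learn's optimized CART
clf = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))   # high even if fraud recall is poor
print(confusion_matrix(y_test, y_pred))              # rows = actual, columns = predicted
print(classification_report(y_test, y_pred))         # per-class precision, recall, F1
```

Later sketches in this guide reuse the clf, X_train/X_test, and y_train/y_test names defined here.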
In the confusion_matrix() function, the first argument is the true label distribution and the second is the predicted label distribution; an optional labels argument fixes the order of the class labels in the matrix. Note that the matrix scikit-learn prints is the transposed version of the example used earlier in the article: in this Python version, rows represent the expected (actual) class labels and columns represent the predicted class labels. In general the confusion matrix is an N x N matrix, where N is the number of classes or outputs. It summarizes the performance of a classifier on a particular dataset by counting true positives, true negatives, false positives, and false negatives, and it can only be determined once the true values for the test data are known. Metrics such as accuracy, precision, lift, and F-scores all draw on values from both dimensions of the matrix.

Here is a real-world use case where a confusion matrix is helpful. Fraud detection: a bank uses a machine learning model to identify fraudulent transactions, and the matrix shows not just how often the model is right but which kind of mistake it makes, flagging legitimate transactions or letting real fraud through.

While a decision tree is a fairly simple algorithm in itself, implementing one with scikit-learn is even easier. Prepare the data, create the classifier, train it with clf = clf.fit(X_train, y_train) (the split at each node is chosen with a criterion such as information gain or Gini impurity), and predict the response for the test set with y_pred = clf.predict(X_test). The estimator's score() method returns the mean accuracy on the given test data and labels; in multi-label classification this is subset accuracy, a harsh metric because it requires each sample's entire label set to be predicted correctly. The confusion matrix and a confusion plot are what show where the model is actually making its errors. If you work in R instead, note that predict() on an rpart classification tree returns class probabilities by default, so you need to override that with type='class' to get hard labels for a confusion matrix.

From the matrix you can also recover the per-class figures for all classes at once with pandas or NumPy.
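Here is a sketch of that NumPy approach. It assumes scikit-learn's row = actual, column = predicted convention, and the matrix values themselves are made up for illustration.

```python
# Per-class TP, FP, FN, TN and precision/recall derived directly from a confusion matrix.
import numpy as np

cm = np.array([[50,  2,  3],
               [ 4, 45,  1],
               [ 6,  0, 39]])

TP = np.diag(cm)                 # correctly predicted instances of each class
FP = cm.sum(axis=0) - TP         # predicted as the class, but actually something else
FN = cm.sum(axis=1) - TP         # actually the class, but predicted as something else
TN = cm.sum() - (TP + FP + FN)   # everything not involving the class at all

print("precision per class:", TP / (TP + FP))
print("recall per class:   ", TP / (TP + FN))
```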
Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression, and Python's popular Scikit-Learn library supports both uses. A typical exercise is to implement a decision tree on a small benchmark such as the Balance Scale Weight & Distance data set: load it with pandas' read_csv() function, display the top rows, separate the independent and dependent variables, and fit the tree. Keep in mind that the shape of the tree depends on the splitting criterion; a tree grown by splitting nodes on information gain will generally differ from one grown with the phi function, even on the same data.

However the tree is grown, you assess it the same way you would assess the logistic regression from earlier in this series: with a confusion matrix, misclassification rates, and a ROC (Receiver Operating Characteristic) plot, the ROC-AUC curve and the confusion matrix being a particularly useful pair. The confusion matrix shows the ways in which your classification model is confused when it makes predictions: it provides insight not only into the overall performance of the model but also into which classes are being predicted correctly, which incorrectly, and what type of errors are being made. In the fraud example, the first row is for transactions whose true label is legitimate and the second row for those that are actually fraudulent. Accuracy can be calculated directly from the values in the matrix, but, as noted above, the trouble comes when the classes in your response variable are imbalanced.

Printing the matrix is as simple as calling sklearn.metrics.confusion_matrix, and the same workflow applies to ensembles: a random forest trains each of its trees on a random sampling of the data points, and gradient-boosted trees such as LightGBM (covered further down) are evaluated identically. An alternative to printing is to 'plot' the confusion matrix, as shown here.
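Two common ways to plot the matrix are sketched below, assuming the fitted clf, X_test, and y_test from the first example: scikit-learn's ConfusionMatrixDisplay (available from version 1.0 onward) and a seaborn heatmap of the raw counts.

```python
# Visualize the confusion matrix of an already-fitted classifier.
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

ConfusionMatrixDisplay.from_estimator(clf, X_test, y_test, cmap="Blues")
plt.title("Decision tree confusion matrix")
plt.show()

# The same counts as an annotated heatmap.
cm = confusion_matrix(y_test, clf.predict(X_test))
sns.heatmap(cm, annot=True, fmt="d", cmap="binary")
plt.xlabel("Predicted label")
plt.ylabel("Actual label")
plt.show()
```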
If you want to obtain confusion matrices for multiple evaluation runs, such as comparing the training and test sets, comparing two classifiers (say a random forest and a decision tree) on the same binary target, or cross validation, you have to assemble them by hand, which in scikit-learn is only a few lines of code. For instance, cm_train = confusion_matrix(y_train, y_train_pred) and cm_test = confusion_matrix(y_test, y_test_pred) give separate matrices for the training and test predictions, and printing both immediately reveals overfitting. The same function handles more than two classes: with four labels you simply get a four-by-four matrix, and when one class (say -1) is treated as the negatives while 0 and 1 are variations of positives, the matrix still shows exactly where the overlap in class properties causes mistakes and which classes are most easily confused. Put simply, the confusion matrix is a predictive-analytics tool that displays and compares actual and predicted values.

Precision deserves a precise definition here: it is the ratio tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. Intuitively, it is the ability of the classifier not to label a negative sample as positive.

A few words on the model being evaluated. A decision tree is a supervised learning algorithm for classification and regression whose objective is to infer class labels; it can capture non-linear relationships between features and labels, it does not require feature scaling (e.g. standardization), and it carves the feature space into decision regions in which every instance is assigned the same class label. The tree decides by choosing a root node and splitting further into child nodes. A random forest, as its name suggests, is a "forest" of such trees, each built on a randomly selected subset of the data, with the final prediction taken by majority voting. Cross validation is the right way to check the accuracy of either model, and the per-fold confusion matrices can be summed into one overall matrix, as sketched below.
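A sketch of that by-hand aggregation, assuming the NumPy arrays X and y from the first example; the fold count and random seeds are arbitrary choices.

```python
# One confusion matrix per cross-validation fold, summed into an overall matrix.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
total_cm = np.zeros((2, 2), dtype=int)            # two classes in the fraud-like data

for train_idx, test_idx in skf.split(X, y):
    fold_clf = DecisionTreeClassifier(random_state=0)
    fold_clf.fit(X[train_idx], y[train_idx])
    total_cm += confusion_matrix(y[test_idx], fold_clf.predict(X[test_idx]),
                                 labels=np.unique(y))

print(total_cm)   # aggregated counts over all five validation folds
```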
A decision tree is a flowchart-like tree structure in which each internal node represents a feature (or attribute), each branch represents a decision rule, and each leaf node represents the outcome; an everyday example is a flowchart that helps a person decide what to wear based on the weather conditions. In scikit-learn, the criterion parameter measures the quality of a split, and the supported criteria are "gini" for the Gini impurity and "entropy" and "log_loss", both forms of the Shannon information gain.

The confusion-matrix workflow is not specific to decision trees: it can be used with any classification algorithm, such as logistic regression, a decision tree, or a random forest, and the task is always to predict the state of the target given some attributes or independent variables. For the plotting helpers you only need to pass the fitted estimator (clf in your case) along with the input (X_test) and the true target values (y_test); the test set is data the model has not been trained on, which is the whole point, since the matrix compares the actual and predicted classes and lays out the TP, TN, FP, and FN counts of the model's predictions. A quick way to locate the errors by hand is the Boolean comparison y_test != y_pred, which flags every false positive and false negative. A classic illustration is evaluating the quality of a classifier's output on the iris data set, and a common beginner question ("should the entire first row be 0 other than class A?") is answered by remembering that each row counts what actually happened to that class, not what the model was supposed to do.

To try a gradient-boosted alternative, install LightGBM with the Python pip command "pip install lightgbm". LGBM exposes a scikit-learn-style API for both classification and regression, so getting a confusion matrix or classification report works exactly as before, as in the sketch below.
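A hedged sketch of that LightGBM workflow, assuming the package is installed and reusing the train/test split from the first example; the hyperparameter values are illustrative only.

```python
# LightGBM classifier evaluated with the same confusion-matrix workflow.
from lightgbm import LGBMClassifier
from sklearn.metrics import confusion_matrix, classification_report

lgbm = LGBMClassifier(n_estimators=200, learning_rate=0.05, random_state=42)
lgbm.fit(X_train, y_train)
y_pred_lgbm = lgbm.predict(X_test)

print(confusion_matrix(y_test, y_pred_lgbm))
print(classification_report(y_test, y_pred_lgbm))
```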
Under the hood, building a decision tree involves calling a get_split()-style function over and over again on the groups created for each node; a node may end up as a terminal node or be split further into child nodes. The algorithm divides the entire dataset into a tree-like structure based on conditions on the attribute values, it learns to partition on the basis of those values, and the fitted tree can be seen as a piecewise constant approximation, which is why it works for both continuous and categorical output variables. In practice you rarely write this yourself: Scikit-Learn, or "sklearn", is a machine learning library created for Python to expedite exactly these tasks, and train_test_split() for the data split, head() for a quick look at the first five rows, and the DecisionTreeClassifier object cover the whole pipeline.

On the evaluation side, the simplest confusion matrix is for a two-class classification problem, with a negative (class 0) and a positive (class 1) class. A quick way to get a feel for it is to generate the "actual" and "predicted" values yourself with NumPy (for example with numpy.random.binomial) and pass them to confusion_matrix(). If you evaluate several models or several folds, the individual confusion matrices can simply be summed; the summed matrix still contains the true positives, false positives, and so on, so nothing additional needs to be computed. Boosted ensembles use similar bookkeeping: AdaBoost splits its trees into groups according to their decisions and adds up the significance of every tree inside each group when making a prediction.

Bagging is the other classic ensemble. This type of bagging classification can be done manually using Scikit-Learn's BaggingClassifier meta-estimator, as shown here, where the data are randomized by fitting each estimator on a random subset of 80% of the training points.
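A sketch of that setup, again reusing the earlier train/test split. The base estimator is passed positionally because its keyword name changed from base_estimator to estimator in scikit-learn 1.2, and the choice of 100 estimators is arbitrary.

```python
# Manual bagging: 100 decision trees, each trained on a random 80% subset of the data.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                        max_samples=0.8, random_state=1)
bag.fit(X_train, y_train)
print(confusion_matrix(y_test, bag.predict(X_test)))
```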
The confusion matrix helps data scientists identify the strengths and weaknesses of a model class by class. Its diagonal elements represent the number of points for which the predicted label is equal to the true label, while the off-diagonal elements are those that are mislabeled by the classifier. On the iris test set used here, for example, the top-left square shows that all 5 setosa irises were classified correctly, and only two mistakes appear, the two 1s outside the diagonal. Accuracy is simply sum(diagonal of the confusion matrix) / sum(all boxes in the confusion matrix), which is what accuracy_score(test_lab, test_pred_decision_tree) reports, about 0.9833 in this run. The same reading applies to any model; taking the confusion matrix of an XGBoost model as an example changes the numbers, not the interpretation, and for the multi-class case everything you need can be found in the matrix.

Two derived summaries complete the picture. The classification report (its parameters are y_true, the true target values; y_pred, the predicted output of the model; and target_names, the names of the classes) returns a text summary, or a dict, of the precision, recall, and F1 score for each class. The F1 score can be interpreted as a harmonic mean of precision and recall, with the relative contribution of the two being equal; it reaches its best value at 1 and its worst at 0, and in terms of raw counts the formula is F1 = 2*TP / (2*TP + FP + FN). This is how the earlier imbalance example resolves: a model can show 87% precision for target variable = 1 and still be useless on target variable = 0.

Three practical notes. After a label-encoder transformation all features are numerical, so a scikit-learn decision tree bases every split on those numerical values (the encoded "outlook" column included). You should perform cross validation if you want a trustworthy accuracy estimate. And you cannot calculate a ROC curve directly from a confusion matrix, because the AUC-ROC curve measures performance across all classification thresholds; for that you need the model's predicted probabilities, as sketched below.
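A sketch of that ROC calculation for the decision tree, assuming the fitted binary classifier clf and the test split from the first example, with class 1 as the positive (fraud) class.

```python
# ROC curve and AUC from the decision tree's predicted probabilities.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

probs = clf.predict_proba(X_test)[:, 1]          # probability of the positive class
fpr, tpr, thresholds = roc_curve(y_test, probs)  # fpr/tpr across all thresholds at once
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label=f"decision tree (AUC = {roc_auc:.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```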
Reading the matrix is mostly a matter of keeping the axes straight. In your data, the target variable was either "functional" or "non-functional"; the legend on the right side of the matrix tells you that column "a" is functional and "b" is non-functional, and each cell counts how many observations of one true class received each predicted label. In the fraud example, this is what helps the bank understand whether its model tends to block legitimate customers or to let real fraud through. Be careful with conventions, though: some references describe each row of the matrix as the instances of a predicted class and each column as the instances of an actual class, which is the transpose of scikit-learn's layout, so always check which way round your tool reports it.

Two follow-ups from earlier sections. First, a weighted decision tree is expected to perform better on imbalanced data, even if only slightly; class weighting is revisited at the end of this guide. Second, the ensemble view: a random forest is an ensemble method called Bootstrap Aggregation, or bagging, that uses multiple decision trees to make decisions. Each tree is built from a random sample of the training data (its internal nodes still represent features of the data), and once the forest is trained you use it to make predictions on data outside of the training set, combining the trees by majority vote.

Finally, averaging. For a multi-class problem such as the scikit-learn IRIS data set, with its three classes setosa, versicolor, and virginica, precision and recall can be micro-averaged (pooling the TP, FP, and FN counts of all classes before dividing) or macro-averaged (computing the score per class and then taking the unweighted mean). The Python code sample below shows the calculation of micro-average and macro-average precision and recall scores for a model trained on that data set.
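A sketch of that calculation, using a decision tree on Iris rather than any particular model from the quoted examples; the split proportions and seed are arbitrary.

```python
# Micro- vs macro-averaged precision and recall on the three-class Iris data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, precision_score, recall_score

X_iris, y_iris = load_iris(return_X_y=True)
Xi_train, Xi_test, yi_train, yi_test = train_test_split(
    X_iris, y_iris, test_size=0.3, stratify=y_iris, random_state=1)
yi_pred = DecisionTreeClassifier(random_state=1).fit(Xi_train, yi_train).predict(Xi_test)

print(confusion_matrix(yi_test, yi_pred))
print("micro precision:", precision_score(yi_test, yi_pred, average="micro"))
print("macro precision:", precision_score(yi_test, yi_pred, average="macro"))
print("micro recall:   ", recall_score(yi_test, yi_pred, average="micro"))
print("macro recall:   ", recall_score(yi_test, yi_pred, average="macro"))
```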
The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. A node may have zero children (a terminal node), one child (one side makes a prediction directly), or two child nodes, and the topmost node is the root. Whatever the model, the evaluation recipe stays the same: split your data set into two parts, use the first one to learn your system, then perform the prediction process on the second part and compare the predicted results with the true ones. The fundamental idea of the confusion matrix is exactly that comparison: the number of correct and incorrect predictions summed up class-wise, an N x N table of predicted results against actual results for the test set, where N is the number of possible outcomes. The same evaluator exists outside scikit-learn; PySpark's MulticlassMetrics, for instance, takes an RDD of (prediction, label) pairs, so you need to cast your predictions to an RDD and map them to tuples before calling its methods.

One scoring detail worth knowing: precision_score() and its relatives accept a zero_division parameter. The default "warn" acts as 0 but also raises a warning whenever a class receives no positive predictions; it can instead be set to 0, 1, or np.nan (the np.nan option was added in scikit-learn 1.3).

That brings this short guide on decision trees and confusion matrices in Python to a close. One last refinement deserves a sketch: it is possible that better performance can be achieved with a different class weighting, and the best weighting depends on the performance metric used to evaluate the model. Grid search is a convenient way to explore a range of class weightings, as shown in the final example below.
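To close, here is a sketch of that class-weighting search. The parameter grid is purely illustrative, scoring='f1' targets the positive class of the binary fraud-like data from the first example, and the reused X_train/y_train/X_test/y_test names are assumptions carried over from that sketch.

```python
# Grid search over class_weight settings for a decision tree, scored by F1.
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

param_grid = {
    "class_weight": [None, "balanced", {0: 1, 1: 10}, {0: 1, 1: 50}, {0: 1, 1: 100}],
    "max_depth": [3, 5, None],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid,
                      scoring="f1", cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print(confusion_matrix(y_test, search.best_estimator_.predict(X_test)))
```

Which weighting wins depends on the metric you optimize; a recall-oriented scorer will usually push toward heavier minority-class weights than F1 does.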