Coursera Machine Learning, Week 8 Quiz: Unsupervised Learning

1. (1 point) For which of the following tasks might K-means clustering be a suitable algorithm? Select all that apply.

A. Given a database of information about your users, automatically group them into different market segments.
B. Given sales data from a large number of products in a supermarket, figure out which products tend to form coherent groups (say, are frequently purchased together) and thus should be put on the same shelf.
C. Given historical weather records, predict the amount of rainfall tomorrow (this would be a real-valued output).
D. Given sales data from a large number of products in a supermarket, estimate future sales for each of these products.

Answer: A, B

2. (1 point) Suppose we have three cluster centroids μ1 = [1, 2]ᵀ, μ2 = [−3, 0]ᵀ and μ3 = [4, 2]ᵀ. Furthermore, we have a training example x(i) = [−2, 1]ᵀ. After a cluster assignment step, what will c(i) be?

A. c(i) = 3
B. c(i) is not assigned
C. c(i) = 2
D. c(i) = 1

Answer: C. The example x(i) is closest to μ2 (see the cluster-assignment sketch after the quiz).

3. (1 point) K-means is an iterative algorithm, and two of the following steps are repeatedly carried out in its inner loop. Which two?

A. The cluster assignment step, where the parameters c(i) are updated.
B. Test on the cross-validation set.
C. Randomly initialize the cluster centroids.
D. Move the cluster centroids, where the centroids μk are updated.

Answer: A, D (see the inner-loop sketch after the quiz).

4. (1 point) Suppose you have an unlabeled dataset {x(1), …, x(m)}. You run K-means with 50 different random initializations, and obtain 50 different clusterings of the data. What is the recommended way for choosing which one of these 50 clusterings to use?

A. For each of the clusterings, compute (1/m) ∑_{i=1}^{m} ||x(i) − μ_c(i)||², and pick the one that minimizes this.
B. Always pick the final (50th) clustering found, since by that time it is more likely to have converged to a good solution.
C. The only way to do so is if we also have labels y(i) for our data.
D. The answer is ambiguous, and there is no good way of choosing.

Answer: A (see the distortion sketch after the quiz).

5. (1 point) Which of the following statements are true? Select all that apply.

A. Since K-means is an unsupervised learning algorithm, it cannot overfit the data, and thus it is always better to have as large a number of clusters as is computationally feasible.
B. For some datasets, the "right" or "correct" value of K (the number of clusters) can be ambiguous, and hard even for a human expert looking carefully at the data to decide.
C. If we are worried about K-means getting stuck in bad local optima, one way to ameliorate (reduce) this problem is if we try using multiple random initializations.
D. The standard way of initializing K-means is setting μ1 = ⋯ = μK to be equal to a vector of zeros.

Answer: B, C
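
Below is a minimal NumPy sketch of the cluster assignment step from Question 2. NumPy and the variable names are my own choices, not part of the course material; the sketch just computes the squared distance from x(i) to each centroid and picks the closest one.

```python
import numpy as np

# Centroids and training example from Question 2.
centroids = np.array([[1, 2],    # mu_1
                      [-3, 0],   # mu_2
                      [4, 2]])   # mu_3
x = np.array([-2, 1])            # x^(i)

# Cluster assignment step: assign x^(i) to the centroid with the smallest squared distance.
sq_dists = np.sum((centroids - x) ** 2, axis=1)   # squared distances: 10, 2, 37
c_i = np.argmin(sq_dists) + 1                     # +1 because the quiz indexes clusters from 1

print(c_i)   # prints 2, matching the answer c(i) = 2
```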
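The inner loop asked about in Question 3 can be sketched as follows. The function name run_kmeans, the max_iters cutoff, and the empty-cluster guard are illustrative choices, not the course's reference implementation.

```python
import numpy as np

def run_kmeans(X, initial_centroids, max_iters=10):
    """Alternate the two inner-loop steps from Question 3."""
    centroids = initial_centroids.astype(float).copy()
    K = centroids.shape[0]
    for _ in range(max_iters):
        # Cluster assignment step: update c^(i) to the index of the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        c = np.argmin(dists, axis=1)
        # Move-centroids step: update mu_k to the mean of the points assigned to cluster k.
        for k in range(K):
            if np.any(c == k):                    # skip empty clusters in this sketch
                centroids[k] = X[c == k].mean(axis=0)
    return centroids, c
```

Note that randomly initializing the centroids (one of the options marked wrong in Question 3) happens once, before this loop, not inside it.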
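For Question 4, the recommended rule is to compute the distortion (1/m) ∑_{i=1}^{m} ||x(i) − μ_c(i)||² for each of the 50 clusterings and keep the one with the smallest value. The sketch below assumes the run_kmeans function from the previous sketch is in scope; distortion and best_of_random_inits are illustrative names of my own.

```python
import numpy as np

def distortion(X, centroids, c):
    """Cost from Question 4: (1/m) * sum over i of ||x^(i) - mu_{c^(i)}||^2."""
    return np.mean(np.sum((X - centroids[c]) ** 2, axis=1))

def best_of_random_inits(X, K, n_inits=50, max_iters=10):
    """Run K-means n_inits times with different random initializations and keep
    the clustering with the lowest distortion."""
    best = None
    for _ in range(n_inits):
        # Random initialization: pick K distinct training examples as the initial
        # centroids (not a vector of zeros, cf. the last option in Question 5).
        idx = np.random.choice(len(X), size=K, replace=False)
        centroids, c = run_kmeans(X, X[idx], max_iters)   # run_kmeans from the sketch above
        J = distortion(X, centroids, c)
        if best is None or J < best[0]:
            best = (J, centroids, c)
    return best   # (distortion, centroids, assignments) of the best clustering
```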