The density of a finite mixture distribution has the form
p(x) = Σ_{i=1}^{K} πi fi(x; θi)
where the fi(·) are the K component densities and the πi are the mixing proportions. For fixed K, the EM algorithm (see lecture slides) can be
used to estimate the parameters θi, πi, for i = 1, …, K, from an iid
sample. In this question we will restrict attention to the case where all component densities
are p-dimensional normal, with density
f(x) = (2π)^{-p/2} |Σ|^{-1/2} exp{ -(1/2)(x − µ)ᵀ Σ^{-1} (x − µ) }
- (a) Write an R function that uses the EM algorithm to find parameters which maximise the likelihood (or minimise the negative
log-likelihood) for a sample of size n from p(x), for a given choice
of K. The function prototype should be
em.norm(x,means,covariances,mix.prop)
where x is an n × p matrix of data, means, covariances, and
mix.prop are the initial values for the K mean vectors, covariance matrices and mixing proportions. Consider including arguments, with sensible defaults, for the convergence criterion and
the maximum number of iterations.
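For reference, em.norm can iterate the standard EM updates for a K-component Gaussian mixture (a sketch of the usual derivation, with γij denoting the responsibilities):

```latex
% E-step: responsibility of component j for observation x_i
\gamma_{ij} = \frac{\pi_j\, f(x_i;\, \mu_j, \Sigma_j)}
                   {\sum_{k=1}^{K} \pi_k\, f(x_i;\, \mu_k, \Sigma_k)}
% M-step: closed-form updates, with n_j = \sum_{i=1}^{n} \gamma_{ij}
\pi_j = \frac{n_j}{n}, \qquad
\mu_j = \frac{1}{n_j}\sum_{i=1}^{n} \gamma_{ij}\, x_i, \qquad
\Sigma_j = \frac{1}{n_j}\sum_{i=1}^{n} \gamma_{ij}\,(x_i - \mu_j)(x_i - \mu_j)^{\mathsf T}
```

Convergence is typically declared when the decrease in the negative log-likelihood falls below the chosen tolerance, or the iteration cap is reached.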
(b) This question will use the first two columns of the object synth.te
in the MASS library:
x <- synth.te[,-3]
For K = 2, 3, 4, 5, 6, use your function to compute the maximum
likelihood estimates for the finite mixture of normal distributions,
for these data. Select initial parameters either randomly, or by
selecting from a plot of the data.
i. Construct a table that reports, for each choice of K, the
maximised likelihood, and the AIC.
ii. On the basis of this table, which choice of K provides the
best density estimate? For this choice, construct a contour
plot of the estimated density, along with the data.
iii. Briefly discuss any problems you anticipate using the EM
algorithm for computing a mixture model with more components, or in higher dimensions.
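For the table in (i), recall that a K-component mixture of p-dimensional normals with unrestricted covariances has the following number of free parameters, which enters the AIC:

```latex
\mathrm{AIC} = 2d - 2\log\hat{L}, \qquad
d = (K-1) \;+\; Kp \;+\; K\,\frac{p(p+1)}{2}
```

(mixing proportions, means, and covariance matrices respectively; for the bivariate data here, d = 6K − 1). Smaller AIC indicates the preferred K.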
#b.r
install.packages("MASS")
library(MASS)
install.packages("EMCluster")
library(EMCluster)
Y = synth.te[, c(1:2)]
plot(Y[, 1], Y[, 2])  # scatter plot of the two variables of Y
# K = 2: from the plot, reasonable initial means are c(-0.5, 0.3) and c(0.4, 0.5)
# Create starting values
mustart = rbind(c(-0.5, 0.3), c(0.4, 0.5))  # must be at least slightly different
covstart = list(cov(Y), cov(Y))
probs = c(.01, .99)
# EMCluster stores each covariance as a row of its lower triangle (LTSigma)
ltsigma = t(sapply(covstart, function(S) S[lower.tri(S, diag = TRUE)]))
# run EM from these starting values
ret = emcluster(Y, emobj = list(pi = probs, Mu = mustart, LTSigma = ltsigma),
                assign.class = TRUE)
# plot the fitted clustering (the synth.te columns are xs and ys)
library(ggplot2)
ggplot(aes(x = xs, y = ys), data = Y) +
  geom_point(aes(color = factor(ret$class)))
em.aic(x = Y, emobj = list(pi = ret$pi, Mu = ret$Mu, LTSigma = ret$LTSigma))  # AIC of the fit
# K = 3: three initial means chosen from the plot
probs = c(.1, .2, .7)
mustart = rbind(c(-0.7, 0.3), c(-0.3, 0.8), c(0.4, 0.5))  # must be at least slightly different
covstart = list(cov(Y), cov(Y), cov(Y))
- Consider a two-class bivariate classification problem, with equal prior
probabilities and class-conditional densities given by
f(x, y | Ci) = 4 θi² x y exp{ -θi(x² + y²) },   x, y > 0,
with θi > 0 for i = 1, 2. Note that this joint density is the product of
two Rayleigh distributions.
(a) Write an R function that generates a random sample of size n
from class C1 and a random sample of size n for class C2. The
function should return both the feature vectors and the class
indicator. A function for generating Rayleigh-distributed random
variables is available.
(b) Obtain an expression for the decision boundary for minimum error. Suppose we are interested in the situation where the decision
boundary for minimum error intersects with the midpoint of the
line connecting the sample mean vectors. Derive an expression
for θ1 and θ2 to satisfy this situation.
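As a check for (b): with equal priors, the minimum-error boundary is the set where the class-conditional densities are equal, which reduces to a circular arc in the positive quadrant:

```latex
4\theta_1^2\, xy\, e^{-\theta_1(x^2+y^2)} \;=\; 4\theta_2^2\, xy\, e^{-\theta_2(x^2+y^2)}
\;\Longrightarrow\;
x^2 + y^2 \;=\; \frac{2\ln(\theta_1/\theta_2)}{\theta_1 - \theta_2}.
```

For the midpoint condition, note that each coordinate of class Ci has mean E[X] = √π/(2√θi), so the sample mean vector for class Ci converges to (√π/(2√θi))(1, 1)ᵀ.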
(c) Derive an expression in terms of θ1 and θ2 for the Bayes error rate.
Now, suppose θ2 = 1 and θ1 > θ2. Use the golden ratio search
algorithm developed in question 4 of project 1, to determine the
value of θ1 that gives a Bayes error rate of 15%. The solution
occurs in the interval [3, 10]. (Hint: The target function does
not have to be differentiable at the minimum for the golden ratio
search to work.)
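A useful fact for (c): given class Ci, R² = X² + Y² is the sum of two independent Exp(θi) variables, i.e. Gamma(shape 2, rate θi), so P(R² > t) = e^{-θi t}(1 + θi t) and the Bayes error is available in closed form. The following is a numerical sanity check, sketched in Python rather than R for verifiability (the function names are my own, not part of the assignment):

```python
import math

def bayes_error(theta1, theta2=1.0):
    """Bayes error for the two-class Rayleigh-product model, equal priors.
    R^2 | Ci ~ Gamma(shape 2, rate theta_i), so P(R^2 > t) = exp(-theta_i*t)*(1 + theta_i*t)."""
    t = 2.0 * math.log(theta1 / theta2) / (theta1 - theta2)  # squared boundary radius
    err1 = math.exp(-theta1 * t) * (1 + theta1 * t)          # C1 points falling outside
    err2 = 1 - math.exp(-theta2 * t) * (1 + theta2 * t)      # C2 points falling inside
    return 0.5 * err1 + 0.5 * err2

def golden(f, a, b, tol=1e-8):
    """Golden-ratio search for the minimiser of a unimodal f on [a, b]."""
    r = (math.sqrt(5) - 1) / 2
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - r * (b - a)
        else:
            a, c = c, d
            d = a + r * (b - a)
    return (a + b) / 2

theta1 = golden(lambda t: abs(bayes_error(t) - 0.15), 3.0, 10.0)
print(theta1)  # roughly 4.7
```

The target |error − 0.15| is V-shaped and not differentiable at its minimum, which is exactly the situation the hint says the golden-ratio search tolerates.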
(d) Write down a discriminant function for each class, treating the
parameter θi as unknown.
(e) Let θ1 = 4 and θ2 = 2. Construct a plot of the unconditional density, f(x, y) = p(C1) f(x, y | C1) + p(C2) f(x, y | C2), for the specified
parameter values. Obtain a sample of 50 observations from each
class. Add these data and the Bayes optimal decision boundary
to the plot.
(f) Derive the maximum likelihood estimators for the parameters of
each class, given a sample of size n from each class.
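For (f), the derivation goes through in closed form: the log-likelihood for class i from pairs (xj, yj), j = 1, …, n, and its stationary point are

```latex
\ell(\theta_i) = \sum_{j=1}^{n}\Big[\ln 4 + 2\ln\theta_i + \ln(x_j y_j) - \theta_i\,(x_j^2 + y_j^2)\Big],
\qquad
\frac{\partial \ell}{\partial \theta_i} = \frac{2n}{\theta_i} - \sum_{j=1}^{n}(x_j^2 + y_j^2) = 0
\;\Longrightarrow\;
\hat\theta_i = \frac{2n}{\sum_{j=1}^{n}(x_j^2 + y_j^2)}.
```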
(g) Write two R functions, the first for computing the maximum
likelihood estimates in (f) from a set of data generated by the
function in (a), and the second for evaluating the discriminant
function for each class, using the maximum likelihood estimates
(the estimative discriminant function). Compute the discriminant scores for the data generated in (e) and estimate the error
rate of this classifier on this training data.
(h) Obtain a training sample of size n = 200 and a test sample of
size n = 10000, using the parameter values in part (e). Retain
these training and test samples for use in Questions 3 and 4.
Using these data sets, compute the training and test set error
rates for
i. the estimative version of the true model, using the functions
in part (g),
ii. Linear discriminant analysis,
iii. Quadratic discriminant analysis.
Provide a table of these error rates for the different models. Comment on the results.
a)
#rrayleigh.r
rrayleigh = function(n, theta){
  # inverse-CDF sampling: F(x) = 1 - exp(-theta*x^2), so F^{-1}(u) = sqrt(-log(u)/theta)
  u = runif(n, 0, 1)
  f = sqrt(-2*log(u))/sqrt(2*theta)
  return(f)
}
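A quick numerical check of the inverse-CDF sampler, together with the univariate version of the part-(f) estimator, sketched in Python for convenience (the helper names are my own):

```python
import math
import random

def rrayleigh(n, theta, rng=random):
    """Sample from the Rayleigh density 2*theta*x*exp(-theta*x^2) by inverse CDF:
    F(x) = 1 - exp(-theta*x^2), so F^{-1}(u) = sqrt(-log(1 - u)/theta),
    and since 1 - U is also Uniform(0, 1) we may use u directly."""
    return [math.sqrt(-math.log(rng.random()) / theta) for _ in range(n)]

random.seed(1)
x = rrayleigh(100_000, theta=2.0)
# univariate MLE: theta_hat = n / sum(x_i^2), since X^2 ~ Exponential(theta)
theta_hat = len(x) / sum(v * v for v in x)
print(theta_hat)  # close to the true value 2.0
```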
c)
#theta.r
# target function for the golden ratio search (golden() is the routine
# developed in question 4 of project 1, assumed available here)
theta = function(x){
  (x[1]^2 + x[2]^2)/(2 - x[1]^2 - x[2]^2)
}
golden(theta)  # search over [3, 10] for the theta1 giving a 15% Bayes error rate
d)
install.packages("VGAM")
library(VGAM)
#f.r
# VGAM's drayleigh is parameterised by a scale b, with density (x/b^2)exp(-x^2/(2b^2)),
# so theta corresponds to b = 1/sqrt(2*theta)
f = function(x, theta1, theta2){
  b1 = 1/sqrt(2*theta1); b2 = 1/sqrt(2*theta2)
  1/2*drayleigh(x[1], b1)*drayleigh(x[2], b1) - 1/2*drayleigh(x[1], b2)*drayleigh(x[2], b2)
}
e)
#e.r
# scale conversion b = 1/sqrt(2*theta) as in part (d)
f = function(x, theta1 = 4, theta2 = 2){
  b1 = 1/sqrt(2*theta1); b2 = 1/sqrt(2*theta2)
  1/2*drayleigh(x[1], b1)*drayleigh(x[2], b1) - 1/2*drayleigh(x[1], b2)*drayleigh(x[2], b2)
}  # discriminant function
p = function(x, theta1 = 4, theta2 = 2){
  b1 = 1/sqrt(2*theta1); b2 = 1/sqrt(2*theta2)
  1/2*drayleigh(x[1], b1)*drayleigh(x[2], b1) + 1/2*drayleigh(x[1], b2)*drayleigh(x[2], b2)
}  # unconditional density
About the analyst
LE PHUONG
Sincere thanks to LE PHUONG for her contribution to this article. She completed a master's degree in Computer Science and Technology at Shandong University, focusing on data analysis, data visualization, and data collection. She is proficient in Python, SQL, C/C++, HTML, CSS, VSCode, Linux, and Jupyter Notebook.