Attacks on machine learning systems such as malware detectors and recommendation systems are becoming a major threat. Data poisoning is the primary attack method: an adversary injects a small number of poisoning points into the training set of a machine learning model to degrade its overall accuracy. Targeted data poisoning is a variant of this attack that injects malicious data to cause misclassification of targeted inputs while keeping the overall accuracy almost unchanged from that of the unpoisoned model. Sasaki et al. first applied targeted data poisoning to malware detection and proposed an algorithm that generates poisoning points to misclassify targeted malware as goodware. Their algorithm achieved an 85% attack success rate by adding 15% poisoning points to a malware dataset with continuous variables, while restricting the increase in the test error on nontargeted data to at most 10%. In this paper, we consider data sanitization, a class of common defenses against targeted data poisoning, and propose a defense-aware attack algorithm. Moreover, we propose a stronger targeted poisoning algorithm based on the theoretical analysis of the optimal attack strategy by Steinhardt et al. The computational cost of our algorithm is much lower than that of existing targeted poisoning algorithms. As a result, our new algorithm achieves a 91% attack success rate on the malware dataset with continuous variables by adding the same 15% poisoning points, and it generates poison data approximately 103 times faster than Sasaki's algorithm.