Support Directional Shifting Vector: A Direction Based Machine Learning Classifier

Md. Kowsher, Imran Hossen, Anik Tahabilder, Nusrat Jahan Prottasha, Kaiser Habib, Zafril Rizal M. Azmi


Machine learning models are now widely used to provide rigorous solutions to complicated real-life problems. The field comprises three main domains: supervised, unsupervised, and reinforcement learning. Supervised learning mainly deals with regression and classification. Many classification algorithms exist, each built on a different underlying principle, and classification performance varies with the characteristics of the dataset and the choice of algorithm. In this article, we develop an angle-based model that performs supervised classification. We use two shifting vectors, the Support Direction Vector (SDV) and the Support Origin Vector (SOV), to form a linear function that measures the cosine angle with both the target-class and the non-target-class data. Considering the target data points, the linear function takes a position that minimizes its angle with the target-class data and maximizes its angle with the non-target-class data. The positional error of the linear function is modelled as a loss function, which is iteratively optimized using the gradient descent algorithm. To justify the acceptability of this method, we implemented the model on three different standard datasets, where it showed accuracy comparable to existing standard supervised classification algorithms.
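The paper's exact SDV/SOV formulation and loss are not given in the abstract, but the idea it describes can be illustrated with a minimal sketch: learn a direction vector w (cf. SDV) and an origin vector b (cf. SOV) so that the cosine between (x − b) and w is large for the target class (y = +1) and small for the non-target class (y = −1), using a simple hinge-free loss mean(1 − y·cos) minimized by gradient descent. All names, the loss, and the hyperparameters below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class CosineAngleClassifier:
    """Toy cosine-angle classifier: a hedged sketch, not the paper's exact model.

    Learns a direction vector w and an origin vector b by gradient descent on
    loss = mean(1 - y * cos(angle between x - b and w)), with y in {-1, +1}.
    """

    def __init__(self, lr=0.1, epochs=500, seed=0):
        self.lr, self.epochs, self.seed = lr, epochs, seed

    def _cos(self, X):
        Z = X - self.b                                    # shift by origin vector
        zn = np.linalg.norm(Z, axis=1) + 1e-12            # per-sample norms
        wn = np.linalg.norm(self.w) + 1e-12               # direction-vector norm
        return Z @ self.w / (zn * wn)

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.w = rng.normal(size=X.shape[1])
        self.b = X.mean(axis=0)
        for _ in range(self.epochs):
            Z = X - self.b
            zn = np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12
            wn = np.linalg.norm(self.w) + 1e-12
            cos = (Z @ self.w) / (zn[:, 0] * wn)
            # d cos / d w = z/(|z||w|) - cos * w/|w|^2
            grad_w = -(y[:, None] * (Z / (zn * wn)
                                     - cos[:, None] * self.w / wn**2)).mean(axis=0)
            # d cos / d b = -w/(|z||w|) + cos * z/|z|^2   (since z = x - b)
            grad_b = -(y[:, None] * (-self.w / (zn * wn)
                                     + cos[:, None] * Z / zn**2)).mean(axis=0)
            self.w -= self.lr * grad_w
            self.b -= self.lr * grad_b
        return self

    def predict(self, X):
        # Positive cosine -> closer in angle to the target direction.
        return np.where(self._cos(X) >= 0.0, 1, -1)
```

On well-separated data the learned w rotates toward the target cluster, so the sign of the cosine serves as the decision rule; the paper's reported method should be consulted for the actual loss and update equations.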


DOI: 10.28991/esj-2021-01306

Full Text: PDF


Supervised Machine Learning; Classification; Cosine Similarity; Directional Vectors; Angle Measurement.


Alkhateeb, Jawad Hasan. "An Effective Deep Learning Approach for Improving Off-Line Arabic Handwritten Character Recognition." International Journal of Software Engineering and Computer Systems 6, no. 2 (2020): 53-61.

Aseri, Nur Azieta Mohamad, Mohd Arfian Ismail, Abdul Sahli Fakharudin, and Ashraf Osman Ibrahim. “Review of the Meta-Heuristic Algorithms for Fuzzy Modeling in the Classification Problem.” International Journal of Advanced Trends in Computer Science and Engineering 9, no. 1.4 (September 15, 2020): 387–400. doi:10.30534/ijatcse/2020/5691.42020.

Blei, David M., Andrew Y. Ng, and Michael I. Jordan. “Latent Dirichlet Allocation.” Journal of Machine Learning Research 3 (2003): 993–1022.

Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov. “Enriching Word Vectors with Subword Information.” Transactions of the Association for Computational Linguistics 5 (December 2017): 135–146. doi:10.1162/tacl_a_00051.

Breiman, Leo. “Random Forests.” Machine Learning 45, no. 1 (October 2001): 5–32. doi:10.1023/a:1010933404324.

Cortes, Corinna, and Vladimir Vapnik. “Support-Vector Networks.” Machine Learning 20, no. 3 (September 1995): 273–297. doi:10.1007/bf00994018.

Cunningham, Padraig, and Sarah Jane Delany. “k-Nearest Neighbour Classifiers: (with Python Examples).” arXiv preprint arXiv:2004.04523 (2020).

Fisher, R. A. “The Use of Multiple Measurements in Taxonomic Problems.” Annals of Eugenics 7, no. 2 (September 1936): 179–188. doi:10.1111/j.1469-1809.1936.tb02137.x.

Freund, Yoav, and Robert E. Schapire. “A Decision-Theoretic Generalization of on-Line Learning and an Application to Boosting.” Computational Learning Theory (1995): 23–37. doi:10.1007/3-540-59119-2_166.

Friedman, Nir, Dan Geiger, and Moises Goldszmidt. “Bayesian Network Classifiers.” Machine Learning 29, no. 2 (1997): 131–163. doi:10.1023/a:1007465528199.

Sparck Jones, Karen. “A Statistical Interpretation of Term Specificity and Its Application in Retrieval.” Journal of Documentation 28, no. 1 (January 1972): 11–21. doi:10.1108/eb026526.

Liu, Huawen, Shichao Zhang, Jianming Zhao, Xiangfu Zhao, and Yuchang Mo. “A New Classification Algorithm Using Mutual Nearest Neighbors.” Ninth International Conference on Grid and Cloud Computing (November 2010): 52–57. doi:10.1109/gcc.2010.23.

López-Cruz, Pedro L., Concha Bielza, and Pedro Larrañaga. “Directional Naive Bayes Classifiers.” Pattern Analysis and Applications 18, no. 2 (July 4, 2013): 225–246. doi:10.1007/s10044-013-0340-z.

Luhn, H. P. “A Statistical Approach to Mechanized Encoding and Searching of Literary Information.” IBM Journal of Research and Development 1, no. 4 (October 1957): 309–317. doi:10.1147/rd.14.0309.

Maas, Andrew, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. “Learning Word Vectors for Sentiment Analysis.” Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (2011): 142–150.

McAuley, Julian John, and Jure Leskovec. “From Amateurs to Connoisseurs.” Proceedings of the 22nd International Conference on World Wide Web - WWW ’13 (2013). doi:10.1145/2488388.2488466.

Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. “Efficient Estimation of Word Representations in Vector Space.” arXiv preprint arXiv:1301.3781 (2013).

Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. “Distributed Representations of Words and Phrases and Their Compositionality.” Advances in Neural Information Processing Systems (2013): 3111–3119.

Pernes, Diogo, Kelwin Fernandes, and Jaime Cardoso. “Directional Support Vector Machines.” Applied Sciences 9, no. 4 (February 19, 2019): 725. doi:10.3390/app9040725.

Quinlan, J. Ross. “Induction of Decision Trees.” Machine Learning 1, no. 1 (1986): 81–106. doi:10.1023/a:1022643204877.

Rosenblatt, Frank. “Principles of Neurodynamics. Perceptrons and the Theory of Brain Mechanisms.” Cornell Aeronautical Lab Inc. Buffalo NY, (1961).

UCI Machine Learning Repository. “Pima Indians Diabetes Database.” Kaggle (2016). Available online: (accessed on 30 June 2021).

Yeruva, Sagar, M. Sharada Varalakshmi, B. Pavan Gowtham, Y. Hari Chandana, and PESN. Krishna Prasad. “Identification of Sickle Cell Anemia Using Deep Neural Networks.” Emerging Science Journal 5, no. 2 (April 1, 2021): 200–210. doi:10.28991/esj-2021-01270.




Copyright (c) 2021 Anik Tahabilder