Wednesday, November 20, 2024

Artificial Intelligence System for Value Added Tax Collection via Self-Organizing Map (SOM) - Juniper Publishers

 

Forensic Sciences & Criminal Investigation - Juniper Publishers

Abstract

Findings: Based on our experiments, our approach is an effective instrument for hot-spot and heat-map exploration since it employs visualization techniques that are easy to understand. In the ANN-SOM similarity heat map we observe that VAT vendors or entities with similar VAT return characteristics are grouped in the same area or node. Generally, in business, users are more interested in “abnormal clusters” or hot spots, that is, clusters of VAT vendors who exhibit more suspicious behavior than normal nodes or clusters. When interpreting the ANN-SOM heat map, the abnormal clusters are those that contain a smaller number of entities; these nodes are composed of suspicious VAT vendors, who require detailed human verification by VAT audit specialists. The results show that detection of suspicious VAT declarations is a very challenging task, as VAT declaration datasets are extremely unbalanced in nature. Furthermore, the tax fraud domain is full of unlabeled data, which in turn makes it difficult to use supervised learning approaches. VAT fraud or suspicious behavior can be differentiated by observing VAT return form attributes such as VAT Liability, Exempt Supplies, Diesel Refund and Input VAT on Capital Goods purchased.

Research, Practical & Social implications: The article highlights the use of SOMs in exploring hot spots in a large real-world data set from the taxation domain. The approach is an effective tool for hot-spot exploration since it offers visualizations that are easy for tax administration users to understand. Tax auditors can select abnormal clusters for further investigation and exploration. The framework and method are designed with the objective of assisting with VAT audit case selection. Furthermore, we envisage that the model would improve the effectiveness and efficiency of revenue collection agencies in identifying anomalies in VAT returns filed by taxpayers. Moreover, tax authorities may be able to select the most appropriate unsupervised learning technique from this work, having considered other alternatives, their operational requirements and their business context, leading to a multitude of available artificial-intelligence-aided VAT fraud detection algorithms and approaches. Additionally, the techniques proposed in this paper will help tax administrations with precise case selection using an empirical and data-driven approach, which does not depend upon labelled historic VAT datasets. Finally, we envisage that the approach will result in a high hit ratio on suspicious VAT returns, and thus improve tax compliance due to the increased likelihood of detection.

Originality/value: Although this paper’s focal point is VAT fraud detection, we are confident that the present model may just as well be applicable to other tax types, such as Company Income Tax and Personal Income Tax. This research outcome shows the potential of artificial intelligence techniques in the realm of VAT fraud and criminal investigation. Furthermore, this review puts forward high-level and detailed classification frameworks for VAT fraud detection. Additionally, the framework proposed herein presents tax auditors with a systematic case selection guide for suspicious VAT returns. Finally, it is crucial to have an all-encompassing view of detecting tax fraud in general and VAT fraud in particular, so as to broaden the understanding and knowledge of the VAT fraud phenomenon among researchers.

Keywords: Self-organizing map; Cluster analysis; Anomaly detection; VAT fraud detection; Artificial intelligence; Robotic process automation; Algorithms and machine learning; Criminal investigation

Introduction

In the Information Systems field, IS or IT business strategies and modelling can be described as the act or science of initiating a transaction or exchange through a predetermined series of actions (e.g., organizational management, planning, or technology processes). Research on digital platforms (or multisided markets) originated in IS economics and has been prominent in the strategy field since the early 2000s [1,2]. The adoption of Internet and mobile phone services has enabled industries to introduce platform-enabled business models commonly described as disrupting industry structures [3,4], for instance in the transportation, lodging and meal delivery sectors (such as Uber, Airbnb and Mr. Delivery).

An AI digital platform (e-platform) is an ICT value-creation mechanism that facilitates transactions between several groups of users, including buyers and sellers [2]. For example, content and search engine optimization and social media marketing and optimization are augmenting consumer buying power as more and more consumers voice their opinions. Furthermore, customers express their views about the industry, their brands and related product attributes. Ineffective delivery of products and services with respect to customer requirements can damage the corporate brand, image, loyalty, and values. This can lead to customer discontent and, in turn, disengagement from products and brands through eWOM [5]. Overall, digital technologies present considerable opportunities for enterprise leaders to rethink their business to create better experiences for customers, employees, and partners, as well as to lower the cost of services [6].

In this research, we explore how an Artificial Intelligence digital platform framework could be employed for value added tax fraud prediction in the revenue service sector. Notably, AI digital platforms appear to influence effective and productive revenue collection strategic decisions. Recent work and trends in the field of AI digital platforms vary widely: Apple’s Siri enables smart mobile searches and the web search and capture of keywords, while Google Duplex handles hair-grooming appointments and restaurant reservations with voice tone and language patterns that are hardly distinguishable from a human voice [7]. Recently, Amazon entered a partnership with Marriott International Inc. wherein the Amazon Flywheel and Amazon Alexa voice-enabled platforms assist hotel guests with everything from room service to housekeeping [8]. In spite of these developments, we suggest that very little research exists on the use of AI in computer information systems that explores digital platforms designed to aid the efforts of revenue collection and the identification of tax fraud and evasion.

VAT fraud, as well as VAT criminal investigation, can be explained as a deliberate misrepresentation of information in VAT returns or declarations to decrease the amount of the tax liability [9]. VAT fraud is a major problem for tax administrations across the world. It is carried out by criminals and organized crime networks. VAT fraud can occur in many sectors, including electronics, minerals, cars, and carbon permits. The most attractive goods for fraudsters have been those of high value and low volume, such as mobile phones or computer chips, which generate huge amounts of VAT in the lowest number of transactions and in the shortest possible time [10]. At the heart of the VAT system is the credit mechanism, with tax charged by a seller available as a credit against the liability on their own sales and, if more than the output VAT due, refunded to them. According to Keen & Smith [11], this creates opportunities for several types of fraud characteristic of the VAT, namely: False Claims for Credit or Refund; Zero-rating of Exports and Misclassification of Commodities; Credit Claimed for VAT on Purchases that are not Creditable; Bogus Traders; Under-reported Sales; Failure to Register; and Tax Collected but not Remitted.

The development of an AI digital platform (e-platform) for VAT fraud detection is required to ensure that the large amounts of revenue that the government could use for much-needed socio-economic public services, such as hospitals, schools and road infrastructure, are generated. Artificial neural networks (ANNs), when trained properly, can work like a human brain: they learn by example, like people, and are known to be exceptionally good classifiers. Furthermore, the neural network is preferred in this study due to its ability to solve classification problems [12]. Machine learning algorithms are very likely to produce faulty classifiers when they are trained with imbalanced datasets, and fraud datasets are characteristically imbalanced. An imbalanced dataset is one where the number of observations belonging to one class is significantly higher than the number belonging to the other classes. Many algorithms show a bias for the majority class, treating the minority class as noise in the dataset; in many standard classifier algorithms, such as Naive Bayes, Logistic Regression, and Decision Trees, there is a likelihood of wrong classification of the minority class. ANNs are better suited to imbalanced datasets [13]. Hence, this research proposes a Self-Organizing Map (SOM) neural network algorithm to detect VAT fraud.

A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN). The SOM is trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other AI algorithms in that they use competitive learning instead of error-correction learning [14], and they use a neighborhood function to preserve the topological properties of the input space [15].

Background

The background for this research is multifold: to create a VAT fraud detection AI framework and to apply a Self-Organizing Map (SOM) algorithm to detect VAT fraud. This paper focuses on VAT fraud detection. The fraud detection arena is characterized by extremely large amounts of unlabeled structured and unstructured data, and unsupervised machine learning algorithms are well suited to unlabeled datasets. Hence, we propose an unsupervised machine learning approach for detecting VAT fraud. We begin by describing the SOM algorithm. Thereafter, we discuss data collection and data preparation [9]. Additionally, we elaborate on issues relating to VAT variable feature selection, followed by an exploratory data analysis. Furthermore, we explain the machine learning technique and statistical algorithm employed in this study. Finally, we present results from actual digital platform-based experiments conducted on a taxpayer-level VAT dataset.

Justification

The traditional way of improving tax fraud detection through tax audits is costly and limited in scope, given the vast population of taxpayers and the limited capacity of tax auditors. Auditing tax returns is a slow and costly process that is very prone to errors, and conducting tax audits involves costs to the tax administration as well as to the taxpayer. Furthermore, the field of anomaly and fraud detection is characterized by unlabeled historical data. To this end, the authors suggest the use of unsupervised machine learning (ML) algorithms, which are well suited to unlabeled datasets. There is little research comparing the effectiveness of various unsupervised learning approaches in the VAT fraud realm. The detection of tax fraud can be constructively approached with supervised ML techniques; however, those methods require enormous training datasets containing data instances corresponding to both verified tax fraud cases and compliant taxpayers. Many previous studies allude to the scarcity of labelled datasets in both developed and developing countries. A second problem with supervised ML approaches is that only a small number of the frauds identified by tax administrations are recorded in the training dataset, so recorded fraud cases are not representative of the entire population. A trained supervised model will therefore be biased: it may achieve a high fraud hit ratio but a low recall.

Consequently, the lack of labeled tax fraud data is usually addressed with unsupervised ML methods based on anomaly detection algorithms, and such methods are suitable as a decision support or case selection tool in tax fraud systems. Unsupervised algorithms enable better and faster prioritization of tax audit cases, thus improving the effectiveness and efficiency of tax collection. Secondly, tax fraud case selection based on accurate unsupervised learning may lead to a more efficient use of resources.

In the study, various approaches are considered, including Principal Component Analysis (PCA), k-Nearest Neighbors (kNN), Self-Organizing Maps (SOM) and K-means, as well as deep learning methods including Convolutional Neural Networks (CNN) and Stacked Sparse AutoEncoders (SSAE). This paper can serve as a guideline for analysts selecting ML methods for tax fraud detection systems, as well as for researchers interested in developing more reliable and efficient methods for fraud detection. In this study, the VAT datasets were obtained from the South African tax administration. In particular, a dataset from the mining industry was chosen, because the South African diesel rebate scheme is very prone to abuse and VAT fraud. Additionally, the mining sector is very important in South Africa: it employs more than 464,000 people and accounts for 8.2% of GDP [16].

Objective of the Work

The objective of the work is to determine what type of AI technique or framework could be applied to improve tax collection. In particular, the present study explores the use of AI in VAT fraud detection, and its main purpose is to determine how corporate VAT fraud could be detected in real time. Corporates and private businesses primarily use artificial intelligence to influence business models, sales processes, customer segmentation and strategy formulation, as well as to understand customer behaviour, in order to increase revenue [4]. There is substantial research on the influence of AI on business strategies with the objective of increasing revenue [2]. However, there is limited research on the use of AI in information systems research to assist in the efforts of revenue collection and VAT fraud detection.

Detection of suspicious VAT declarations is a very challenging task, as VAT declaration datasets are extremely unbalanced in nature. Furthermore, the tax fraud domain is full of unlabeled data, which in turn makes it difficult to use supervised learning approaches. In this research paper, we therefore propose an unsupervised learning approach. Regardless, it is crucial to have an all-encompassing review of detecting tax fraud in general.

Unsupervised algorithms are well suited to unlabeled historical datasets, common in the fraud detection or classification arena. The authors conduct experiments using an unsupervised Neural Network algorithm to classify suspicious Value Added Tax declarations. This algorithm can assist in the efforts of tax audits made by tax administrations. Consequently, it is envisaged that the chances of detecting fraudulent VAT declarations will be enhanced using AI techniques, proposed in this paper.

Literature Review

In the age of big data, detecting fraudulent activities within tax returns is analogous to finding a needle in a haystack. Anomaly detection approaches and ML techniques that focus on interdependencies between different data attributes have been increasingly used to analyze relations and connectivity patterns in tax returns to identify unusual patterns [17]. In the surmise of Molsa [18], artificial intelligence and automation are poised to reshape the digital platform function. Phua, Alahakoon & Lee [19], in their paper, tabulate, compare and summarize fraud detection methods and techniques published in academic and industrial research during the past 10 years, in the business context of harvesting data to achieve higher cost savings. Phua et al. [19] present a methodology and techniques used for fraud detection together with their inherent problems. In their research, they juxtapose four major methods commonly used for applying a machine learning algorithm: supervised learning on labelled data; a hybrid approach with labelled data; a semi-supervised approach with non-fraud data; and an unsupervised approach with unlabelled data. Meanwhile, Shao et al. [20] describe the building of a fraud detection model for the Qingdao customs port of China. The model is used to provide decision rules to Chinese customs officials for the inspection of goods based on historical transaction data, with the objective of improving the hit rate. The model is appropriately named ‘Intelligent Eyes’ and has been successfully implemented with high predictive accuracy [20].

Tax administration agencies must use their limited resources very judiciously whilst achieving maximal taxpayer compliance at the lowest cost of revenue collection and, at the same time, adhering to lower levels of taxpayer intrusion. The Quantitative Analytics Unit of the Securities Regulation Institute in Coronado, California, USA, developed a revolutionary new statistics-based algorithm application called “NEAT,” which stands for the “National Examination Analytics Tool” [21]. With NEAT, securities examiners can access and systematically analyze massive amounts of trading data from firms in a fraction of the time it took in previous years. In one recent examination, NEAT was used to scrutinize, in 36 hours, 17 million transactions executed by one investment adviser. Among its many benefits, NEAT can search for evidence of probable insider trading by comparing historical data on significant corporate activity, such as mergers and acquisitions, against the companies in which a registrant is trading. Examiners then use this information to analyze how the registrant traded at the time of those notable events. NEAT can review all the securities the registrant traded and quickly identify the registrant’s trading patterns for suspicious activities [21].

Theoretical Background

Artificial intelligence (AI) digital platform

An AI e-platform uses artificial intelligence techniques to make automated decisions based on data collection, data analysis and data scrutiny. An AI digital platform serves as a computer information systems platform that reveals economic trends that may impact system automation efforts. AI techniques such as ML use customer data to learn how best to interact with customers, thereby providing insights that can serve those customers with tailored messages at the right time, without intervention from external factors, to guarantee effective, efficient, and impactful product development and communication. In the current circumstances, this study endeavors to place AI digital platform development in the context of systemic developments that can be thought of as the digitalization of industries’ IT resources [22].

Additionally, an AI e-platform performs repetitive, routine, and tactical tasks that require less human intervention. Its use cases may include data analysis; media buying; automated decision making; natural language processing; content generation; and real-time personalized or tailored messaging [22]. Accordingly, AI digital platforms play a vital role in helping managers to understand ML algorithms such as k-nearest neighbors, Bayesian learning and forgetting, and artificial neural network Self-Organizing Maps. These algorithms help to build a comprehensible understanding of how amenable and responsive a customer is to a specific product offering. Therefore, AI e-platform frameworks are required to process expansive and extensive data sets that can potentially unveil hidden knowledge and insights about products and their customers, enabling organizations to derive significant revenue growth whilst strengthening customer relationships [23]. There is significant research on the impact of AI on business processes in information systems to increase revenue; however, more research is needed to explore its potential in aiding revenue collection by tax administrations [24]. Consequently, in the current study we use unsupervised models to detect Value Added Tax fraud in order to improve tax compliance, and thus enhance revenue collection.

Social behaviours on tax fraud and compliance

Earlier, we explained that an AI e-platform is required to understand customer needs in order to create appropriate and personalized messages and product offerings. In the same vein, a tax administration’s understanding of its taxpayers is key to effective tax administration and revenue collection. Taxpayers’ attitudes toward compliance may be influenced by many factors, which eventually influence a taxpayer’s behavior. The factors which influence tax compliance behavior differ from one country to another, and from one individual to another [25]. They include: taxpayers’ perceptions of the tax system and tax authority [26]; peer attitudes, norms and values; a taxpayer’s understanding of the tax system or tax laws [27]; motivation such as rewards [28]; punishment such as penalties [29]; cost of compliance [30]; enforcement efforts such as audit; probability of detection; differences between cultures; perceived behavioral control [31]; ethics or morality of the taxpayer and tax collector; equity of the tax system; demographic factors such as sex, age, education and size of income; and the use of informants [32].

Therefore, tax fraud detection, enforcement and the behavior of others affect taxpayer compliance [33]. IRS Commissioner Charles Rossotti noted that when the number of audits is reduced, honesty suffers as the fear of policing declines.

Additionally, if taxpayers begin to believe that others are cheating, then the temptation to shave their own tax burdens may become irresistible. Commissioner Rossotti’s observations recognize that tax fraud detection and enforcement affect social behaviors, and that these behaviors can, in turn, affect taxpayers’ compliance decisions [33]. Accordingly, the probability that a taxpayer will escape their tax obligations increases when the taxpayer suspects that their associates, colleagues, and acquaintances are evading taxes [34].

VAT Fraud Detection AI e-Platform Framework

(Figure 1)

Overview of the framework

The VAT Fraud Detection AI e-platform framework employs a Self-Organizing Map (SOM) neural network. The framework is designed with the objective of assisting with VAT audit case selection. Furthermore, we envisage that the model should improve the effectiveness and efficiency of revenue collection agencies in identifying anomalies in VAT returns filed by taxpayers. The framework classifies and segregates taxpayers into clusters or categories that have the greatest likelihood of committing fraud; thus, the framework selects taxpayers for audit based on the probability that they have committed fraud. The VAT Fraud Detection AI Framework proposed herein is an amalgam of a typical industry-standard machine-learning life cycle and tax authorities’ standard guide for VAT auditors.

Flow of the framework

Task 1 – Extract VAT return data: According to the industry-standard machine learning life cycle, this task is conducted under the data gathering phase. This step involves the collection of data and the integration of data obtained from various sources such as files, databases, the internet, or mobile devices. It is one of the most important steps of the life cycle, because the quantity and quality of the collected data determine the efficiency of the output: the more data we collect, the more accurate the classification or prediction will be.

Task 2 – Aggregate data: After collecting the data, we need to prepare it for the steps that follow. In the ML life cycle this task is completed under the data preparation phase, where we put our data into a suitable database or files and prepare it for use in machine learning training. In our framework, for each VAT dealer, we aggregate all numerical continuous variables obtained from the return; in this study, summary values over a period of six years are calculated for each individual VAT vendor. This effectively allows the algorithm to take a longer-term view of vendor behaviour, as opposed to monthly or yearly scrutiny. During this task we also conduct data pre-processing and exploratory data analysis.
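A minimal sketch of this aggregation step is shown below. It is not the authors' production code; the file name and column names ("vendor_id", "output_vat", "input_vat", "diesel_refund") are hypothetical placeholders for the confidential return fields.

```python
# Illustrative sketch: aggregate monthly VAT 201 line items to a single
# six-year summary row per vendor. Column names are hypothetical.
import pandas as pd

returns = pd.read_csv("vat_returns_monthly.csv")   # one row per vendor per month (assumed layout)
numeric_cols = ["output_vat", "input_vat", "diesel_refund"]

# Sum every numerical continuous variable over the full 2013-2018 window,
# giving the algorithm a longer-term view of each vendor's behaviour.
vendor_summary = (
    returns.groupby("vendor_id")[numeric_cols]
           .sum()
           .reset_index()
)
print(vendor_summary.head())
```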

Task 3 – Normalize data: This task is normally undertaken under the data preparation phase of the machine learning life cycle. During data preparation we use a technique called normalization or standardization to rescale the input and output variables prior to training a neural network model. The purpose is to normalize the data so that each variable has a mean close to zero. The review of the literature reveals that normalization can improve the performance of the model [35]. Normalizing the data generally speeds up learning and leads to faster convergence; mapping data to around zero produces a much faster training speed than mapping it to intervals far from zero or using unnormalized raw data.
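The following sketch illustrates one common way to perform this normalization. The paper does not name a library, so scikit-learn's StandardScaler is an assumption, and the random matrix merely stands in for the aggregated vendor-level data.

```python
# Sketch of the Task 3 normalization step: rescale each aggregated variable
# to roughly zero mean and unit variance before SOM training.
import numpy as np
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the aggregated vendor-level matrix (5065 vendors x 35 variables).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=10, sigma=2, size=(5065, 35))

scaler = StandardScaler()
X_norm = scaler.fit_transform(X)        # (x - column mean) / column std

print(X_norm.mean(axis=0).round(6))     # approximately 0 for every variable
print(X_norm.std(axis=0).round(6))      # approximately 1 for every variable
```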

Task 4 – ANN-SOM algorithm: Formally, this stage is about selecting an appropriate machine learning algorithm, which is an iterative process. During this study, we identified multiple machine learning algorithms applicable to our data and the VAT fraud detection challenge. As mentioned previously, we use an unsupervised learning approach, which is appropriate for unlabeled data. The algorithms we evaluated were K-means and Self-Organizing Maps (SOM). According to Riveros et al. [36], a model trained with a SOM outperformed a model trained with K-means; in their study, the SOM improved the detection of patients having vertebral problems [36]. Likewise, after a few iterations comparing SOM and K-means performance, we chose the ANN-SOM algorithm.
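As one possible illustration of this comparison step, a K-means baseline with 16 clusters (matching the 4x4 SOM grid used later) can be fitted and scored; the specific comparison criteria used by the authors are not reported, so the metrics below are assumptions.

```python
# Hedged sketch of a K-means baseline for model comparison against the SOM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X_norm = rng.standard_normal((5065, 35))    # stands in for the normalized VAT matrix

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X_norm)
print("K-means inertia:", kmeans.inertia_)                      # within-cluster sum of squares
print("Silhouette score:", silhouette_score(X_norm, kmeans.labels_))
```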

Task 5 – Train and test model: This stage is concerned with creating a model from the data given to it. We split the dataset into training and test datasets: 80% for training and 20% for testing. Herein, the training process is unsupervised, and the held-out test data is then used to evaluate the model. These two steps are repeated a number of times in order to improve the performance of the model [36].
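A minimal sketch of this 80/20 split is shown below, assuming a random permutation of vendors; even though SOM training is unsupervised, the held-out 20% can still be used to check how well the trained map generalizes (e.g., via quantization error on unseen vendors).

```python
# Sketch of the 80/20 train/test split described in Task 5.
import numpy as np

rng = np.random.default_rng(2)
X_norm = rng.standard_normal((5065, 35))          # normalized VAT matrix stand-in

idx = rng.permutation(len(X_norm))                # shuffle vendor indices
split = int(0.8 * len(X_norm))
X_train, X_test = X_norm[idx[:split]], X_norm[idx[split:]]

print(X_train.shape, X_test.shape)                # (4052, 35) (1013, 35)
```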

Task 6 – Optimize model: A model’s first results are not its last. The objective of optimization, or tuning, is to improve the performance of the model. Tuning a model involves changing hyperparameters such as the learning rate or the optimizer [37]. The result of tuning and improving the model should be repeatability, efficiency and reduced training time: someone should be able to reproduce the steps taken to improve performance.

Task 7 – Deploy model: The aim of this stage is the proper functionality of the model after deployment. The models should be deployed in such a way that they can be used for inference as well as be updated regularly [37].

Task 8 – VAT audit case selection: The cohort of VAT vendors with return declarations that have been identified by the SOM as suspicious end up in the “funnel” for further scrutiny. This step comprises human verification. This audit is merely a general audit of cases selected for further scrutiny, in contrast with an investigative audit, which is concerned with the auditing of cases by a specialist auditor.

Task 9 –Investigative audit, criminal investigation, and enforcement: Investigative audits are different from other tax audits in that a centralized specialist team conducts them. Task 9 is undertaken based on the results obtained from the previous audits conducted in Task 8 above, where audit officers have identified evidence of serious fraud.

Task 10 - Tax compliance: The tax compliance task is involved with the scrutiny of compliance related attributes like filing returns on time, timely payments, accurate completion of returns and timely registration with the tax authority, among others.

Task 11 – Voluntary compliance: The aim of the VAT fraud detection AI framework is to increase voluntary compliance. The level of audit activity and frequency of audit will be dictated by the availability of staff resources. The convenience of the AI framework suggested herein, is that it will ensure that the available staff resources are deployed judiciously with the twin objectives of maximizing both revenue collection and voluntary compliance by VAT dealers.

The “filter” or “funnel” described in Task 8 symbolizes the audit process, which involves detailed human verification and validation of supporting documents such as bills of lading. This in turn assists in the independent verification of financial records such as sales invoices, purchase invoices, customs documents, and bank cash deposits. However, the scope of the human verification is limited to the subset of taxpayers that have been flagged as anomalies by the SOM algorithm we propose. Once human verification has confirmed the presence of suspicious VAT declarations, such cases are dealt with in Task 9, which depicts the work performed by investigative audit, criminal investigation, and enforcement teams on confirmed cases. With this framework we envisage that the effectiveness and efficiency of this AI-assisted compliance approach will enhance the detection of suspicious VAT vendors. Consequently, we anticipate that tax compliance will improve as the fear of detection increases (Task 10). Voluntary compliance will be a consequence of an improved, effective, and efficient AI-based case selection technique (Task 11).

Material and Methodology

Data collection

We employed a rich data set: the totality of VAT returns covering the six years from 2013 to 2018. In order to delineate our data collection, we chose to concentrate on only one industry, namely mining, and collected VAT returns for the complete list of registered vendors for the six tax years 2013 to 2018. The firms have been anonymized so that we cannot link them with any publicly available data; however, they have been assigned identifying numbers so that we can follow a firm over time. The data contains detailed information on the line items in the returns, that is, the VAT 201 declaration form of the South African tax administration. From the VAT return we acquired 35 continuous variables. For ethical and confidentiality reasons, we do not list all 35 variables, but only a subset (Table 1).

Data preparation

According to Peck et al. [38], data preparation is the cleansing and organizing of real-world data, and is known to consume more than 80% of a data scientist’s time. Real-world or raw data is dirty: full of missing values, duplicates and, in some cases, incorrect information [38]. Most machine-learning algorithms cannot deal with missing values, so the data needs to be converted and cleansed. In handling missing values, we dropped rows and applied linear interpolation and mean imputation. Depending on the importance of the variable or feature and the number of missing values, any one of these solutions can be employed [38].

We were fortunate to obtain a clean and high-quality dataset. However, the VAT return dataset we obtained was at monthly level, so we summed all variables to annual values. The aggregation of all numerical variables of the VAT returns spans the six-year period from 2013 to 2018. Thereafter, the rand-value amounts were converted into ratios for ease of comparison. Nevertheless, as stated before, the details of some of the variables used in this study cannot be reported herein due to the confidential nature of the tax audit process; doing so could increase the potential for reverse engineering of the audit process, which is clearly undesirable and unlawful. However, each VAT ratio is designed from the point of view that a significantly higher or lower ratio value in relation to the rest of the sample or observations could arouse suspicion. In the opinion of Castellón González and Velásquez [39], fraud cases are most likely to occur among the extreme values of variables.
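The sketch below illustrates these data-preparation steps under stated assumptions: the toy frame, its columns, and the ratio shown (input VAT relative to sales) are hypothetical examples, since the actual audit ratios are confidential.

```python
# Illustrative data-preparation sketch: drop rows with too many gaps,
# interpolate the remaining gaps, aggregate to vendor level, and convert
# rand amounts into a ratio.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "vendor_id": [1, 1, 2, 2, 3],
    "sales":     [100_000.0, np.nan, 250_000.0, 260_000.0, 40_000.0],
    "input_vat": [12_000.0, 12_500.0, np.nan, 31_000.0, 5_200.0],
})

df = df.dropna(thresh=2)                       # drop rows with too many missing values
df = df.interpolate(method="linear")           # fill remaining gaps by linear interpolation
df = df.fillna(df.mean(numeric_only=True))     # fall back to mean imputation

annual = df.groupby("vendor_id").sum()         # aggregate monthly rows per vendor
annual["input_vat_to_sales"] = annual["input_vat"] / annual["sales"]   # hypothetical ratio
print(annual)
```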

Dataset

The dataset consists of 5065 observations with 35 continuous variables. The observations are Value Added Tax declarations or returns filed with the South African tax administration. The structure of the dataset, showing sample variables, can be seen in Table 1. In addition, we provide aggregate indicative results that demonstrate the effectiveness of our approach. The 35 variables or attributes selected for this study cover areas such as: gross income profile, income source, expense profile, source of purchases, tax payable or refundable, sales destination, import or export purchases, accuracy of the declarations, overdue payments of taxes due to the tax authority, market segments, taxpayer industry, demographics, and the size of the firm.

Exploratory data analysis

In this section we use graphs, visualization, and transformation techniques to explore the VAT dataset (Table 2) in a systematic way. Statisticians call this task exploratory data analysis, or EDA for short. EDA is an iterative cycle: first, we raise questions about the data; second, we look for answers by visualizing and transforming the dataset; and lastly, we use what we have learnt to refine the questions and generate new ones. EDA is not a formal process with a strict set of rules [38]. We hope this initial data analysis will provide insight into important characteristics of the data.

Furthermore, we anticipate that EDA can provide guidance in selecting appropriate methods for further analysis. Additionally, we use summary statistics to provide information about our dataset; during this stage, we envisage that the summary statistics will tell us something about the values in the dataset, including where the average and median lie and whether the data is skewed. According to Peck & Devore [38], summary statistics fall into three main categories: measures of location, measures of spread and graphs.

The measures of location tell us where the data is centered and where a trend lies. Therefore, we use the mean, median and mode. The arithmetic mean, also called the average, is the central value of a discrete set of numbers; specifically, it is the sum of the values divided by the number of values. The median is the middle of a data set, and the mode tells us which value is the most common. Measures of spread, on the other hand, tell us how spread out the data set is. According to Peck & Devore [38], the range (inclusive of the interquartile range and the interdecile range), the standard deviation, the variance and the quartiles are examples of measures of spread. The range depicts how spread out the data is; the interquartile range tells us where the middle 50 percent of the data is located; and the quartiles illustrate the boundaries of the lowest, middle, and upper quarters of the dataset [38].

A correlation matrix was used to quantify dependencies between the 35 continuous variables. For this, a Pearson correlation matrix was calculated for all 35 variables. A correlation between two variables with an absolute value greater than 0.7 is considered high, meaning the variables are closely related to each other. The objective of this analysis is to establish whether two variables are correlated, not that they are necessarily causally related. The sign of the value, positive or negative, indicates whether the two variables are positively or inversely related to each other [40,41]. For example, the correlation value between sales and input VAT is 0.98, meaning that there is a direct positive relationship between the two variables. This is rational, because input VAT is charged on all purchases of goods and services, which later become sales of goods and services by the entity (Table 2).

The correlation matrix and heat maps generated across the 35 variables are a valuable visual representation of VAT data set trends. While the correlation matrix and the heat maps produce the same conclusion, the heat maps can provide further information about the distribution and localization of correlated variables. Our method of generating heat maps can visualize the correlations between multiple variables, providing a broader analysis than using a correlation matrix.
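A sketch of how such a Pearson correlation screen and heat map could be generated follows. The variable names and the synthetic data are placeholders, not the actual return fields, and the 0.7 flagging threshold follows the rule stated above.

```python
# Sketch of the EDA correlation screen: Pearson correlation matrix, a |r| > 0.7
# flag for highly related pairs, and a heat-map rendering.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
sales = rng.lognormal(12, 1, 5065)
data = pd.DataFrame({
    "sales": sales,
    "input_vat": 0.12 * sales + rng.normal(0, 5_000, 5065),   # strongly tied to sales
    "diesel_refund": rng.lognormal(8, 1, 5065),                # unrelated variable
})

print(data.describe().round(1))                 # summary statistics (location and spread)

corr = data.corr(method="pearson")
high = (corr.abs() > 0.7) & (corr.abs() < 1.0)  # flag strongly related variable pairs
print(corr.round(2))
print(high)

plt.imshow(corr, cmap="gray", vmin=-1, vmax=1)  # heat map of the correlation matrix
plt.xticks(range(len(corr)), corr.columns, rotation=45)
plt.yticks(range(len(corr)), corr.columns)
plt.colorbar(label="Pearson r")
plt.title("Correlation heat map (illustrative)")
plt.tight_layout()
plt.show()
```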

Summary statistics (Table 3)

Correlation matrix (Table 4)

Correlation plots (Figures 2-7).

Results and Discussion

Data pre-processing

Normalizing the data, as mentioned in Task 3, generally speeds up learning and leads to faster convergence. Accordingly, mapping data to around zero produces a much faster training speed than mapping it to intervals far from zero or using un-normalized raw data. Academic researchers [35] point to the importance of data normalization prior to neural network training to improve the speed of calculations and obtain satisfactory results, for example in a nuclear power plant application. In the opinion of various authors, statistical normalization techniques enhance the reliability and performance of the trained model [42].

SOM training algorithm

In the training algorithm, the SOM is trained iteratively by taking training data vectors one by one from a training sequence, finding the Best Matching Unit (BMU) for the selected training data vector on the map, and updating the BMU and its neighbors so that they move closer toward the data vector. This process of finding the BMU and updating the prototype vectors is repeated until a predefined number of training iterations or epochs is completed. The SOM training progress is depicted in Figure 8.
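The following NumPy sketch illustrates this training loop in minimal form: find the BMU for a sample and pull the BMU and its neighbours toward it with a decaying learning rate and neighbourhood radius. The 4x4 grid matches the paper; the other settings (iteration count, initial radius, Gaussian neighbourhood) are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal SOM training sketch: BMU search plus Gaussian neighbourhood update.
import numpy as np

def train_som(X, rows=4, cols=4, iterations=1000, lr0=0.5, radius0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    weights = rng.standard_normal((rows, cols, n_features)) * 0.1
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for t in range(iterations):
        frac = t / iterations
        lr = lr0 * (1.0 - frac)                    # decaying learning rate
        radius = radius0 * (1.0 - frac) + 0.5      # shrinking neighbourhood radius

        x = X[rng.integers(len(X))]                # one training vector per iteration
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)   # Best Matching Unit

        # Gaussian neighbourhood: nodes close to the BMU on the grid move more.
        grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
        weights += lr * influence[..., None] * (x - weights)

    return weights

X = np.random.default_rng(4).standard_normal((5065, 35))   # normalized VAT matrix stand-in
som_weights = train_som(X)
print(som_weights.shape)                                    # (4, 4, 35) prototype grid
```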

SOM neighbour distance

The neighbor distance visualization is often referred to as the “U-Matrix”. It shows the distance between each node and its neighbors and is typically viewed with a grayscale palette: areas of low neighbor distance indicate groups of nodes that are similar, while areas with large distances indicate nodes that are much more dissimilar and mark natural boundaries between node clusters. The SOM neighbor distance plot is shown in Figure 9 [43-49].
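A sketch of how such a U-Matrix can be computed from a trained weight grid is given below; a random weight grid stands in for the trained (4, 4, 35) map, and the four-neighbour averaging rule is one common convention rather than a detail reported in the paper.

```python
# Sketch of the U-Matrix (neighbour distance) computation and plot:
# for each node, average the Euclidean distance to its immediate grid neighbours.
import numpy as np
import matplotlib.pyplot as plt

def u_matrix(weights):
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            umat[i, j] = np.mean(dists)
    return umat

som_weights = np.random.default_rng(5).standard_normal((4, 4, 35))  # stand-in for trained map

plt.imshow(u_matrix(som_weights), cmap="gray_r")   # dark = dissimilar neighbours (boundaries)
plt.colorbar(label="mean neighbour distance")
plt.title("SOM U-Matrix (illustrative)")
plt.show()
```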

Artificial neural network-SOM heat map

The ANN-SOM heat map is the outcome of the neural network Self-Organizing Map (SOM) algorithm trained on the VAT dataset, which has 35 continuous numeric variables. The heat map shows the distribution of all variables across the SOM. As stated before, the dataset used in this experiment spans the six years from 2013 to 2018. The outcome is a grid of 16 nodes derived from 5065 observations belonging to the mining industry (Figure 10) [50-55].
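The sketch below shows one way to render a single component plane of such a map, using the grey palette described in the results (minimum white, maximum black). The weight grid is a random stand-in for the trained 4x4x35 SOM, and the choice of which variable to plot is hypothetical.

```python
# Sketch of a component-plane heat map for one VAT variable across the 16 nodes.
import numpy as np
import matplotlib.pyplot as plt

som_weights = np.random.default_rng(6).standard_normal((4, 4, 35))  # stand-in for trained map

variable_index = 0                      # e.g., a "Diesel Refund" component (hypothetical index)
plane = som_weights[:, :, variable_index]

plt.imshow(plane, cmap="Greys")         # low values rendered white, high values black
plt.colorbar(label="component value")
plt.title("ANN-SOM component plane (illustrative)")
plt.show()
```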

Results and Conclusion

As already mentioned, our VAT dataset consists of 35 continuous variables, including Output VAT, Input VAT, Sales, Cost of Sales, VAT Refund and VAT Payable, to name just a few. The sample consists of 5065 different VAT taxpayers, all belonging to the mining industry. In our experiments, the ANN-SOM map size is 4x4; the SOM cluster heat map therefore contains sixteen distinct clusters, and the dots on the clusters represent individual taxpayers or entities. In the ANN-SOM heat map above we observe that VAT vendors or entities with similar VAT return characteristics are grouped in the same area or node. In business, users are more interested in “abnormal clusters” or hot spots, that is, clusters of VAT vendors with suspicious behaviour rather than normal nodes or clusters. We use three approaches to identify hot spots: the ANN-SOM heat map, distance matrix visualization, and domain experts’ feedback based on component plane visualizations. Using distance matrix visualizations, homogeneous clusters (low variation) have shorter neighbor distances (the white regions) compared to high-variation clusters (the dark regions), as shown in Figure 9. The value of a component in a node is the mean value of the entities (VAT vendors) in the node and its neighbors; this average is determined by the neighborhood function and the final radius used in the final training (Figure 8). The color coding of the map is created based on the minimum and maximum values of the component across the map. In this research paper, we use the grey color map, where the maximum value is assigned black and the minimum value is assigned white.

However, when interpreting the ANN-SOM heat map, the abnormal clusters are those that contain fewer entities; these nodes are composed of suspicious VAT vendors, who require detailed human verification by VAT audit specialists. Node 4, for example, has the largest number of entities at 4948; the entities clustered in Node 4 are homogeneous in nature and thus depict VAT entities with normal behaviour. VAT fraud or suspicious behaviour can be differentiated by observing VAT declaration form attributes such as VAT Liability, Exempt Supplies, Diesel Refund, and Input VAT on Capital Goods purchased. Detection of suspicious VAT declarations is a very challenging task, as VAT declaration datasets are extremely unbalanced in nature. Furthermore, the tax fraud domain is full of unlabeled data, which in turn makes it difficult to use supervised learning approaches. In this research paper, we proposed an unsupervised learning approach. Nevertheless, it is crucial to have an all-encompassing review of detecting VAT fraud, so as to broaden the understanding and knowledge of the VAT fraud phenomenon among researchers and in the government marketing domain. Remarkably, supervised learning algorithms have proved to be limited in the arena of VAT fraud detection, since tax administrations have extremely little to no labelled historic data, which in turn cripples the efficacy of supervised learning approaches.
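The sparse-node rule described above can be expressed as a short post-processing step: map every vendor to its Best Matching Unit, count the vendors per node, and flag sparsely populated nodes for human audit. The weight grid, data, and the 1% threshold below are illustrative assumptions, not figures from the paper.

```python
# Sketch of "abnormal cluster" flagging: small-count SOM nodes go to human audit.
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((5065, 35))            # normalized VAT matrix stand-in
som_weights = rng.standard_normal((4, 4, 35))  # stand-in for the trained map

flat = som_weights.reshape(-1, 35)             # 16 nodes x 35 prototype features
bmu = np.argmin(np.linalg.norm(X[:, None, :] - flat[None], axis=-1), axis=1)
counts = np.bincount(bmu, minlength=16)        # entities assigned to each node

threshold = 0.01 * len(X)                      # hypothetical cut-off: < 1% of vendors
suspicious_nodes = np.flatnonzero(counts < threshold)

print("entities per node:", counts)
print("nodes flagged as hot spots:", suspicious_nodes)
for node in suspicious_nodes:
    print(f"node {node}: {counts[node]} vendors routed to VAT audit specialists")
```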

As much as this paper’s focal point is VAT fraud detection, we are confident that the present model may just as well be applicable to other tax types, such as Company Income Tax and Personal Income Tax. This research outcome shows the potential of AI techniques in the realm of VAT fraud. Furthermore, this review puts forward high-level and detailed digital classification frameworks for VAT fraud detection. Additionally, the e-platform framework proposed presents tax auditors with a systematic case selection guide for suspicious VAT returns; combining the two frameworks into a single hybrid approach can improve the success of detecting other VAT fraud schemes. Tax administrations may be able to select the most appropriate unsupervised learning technique from this work, having considered other alternatives, their operational requirements and their business context, leading to a multitude of available AI-aided VAT fraud detection algorithms and approaches. Additionally, the techniques proposed in this paper should help tax administrations with precise case selection using an empirical and data-driven approach, which does not depend upon a labeled historic VAT dataset. Furthermore, we envisage that the approach should result in a high hit ratio on suspicious VAT returns, and thus improve tax compliance due to the increased likelihood of detection. We have demonstrated the use of the ANN-SOM in exploring hot spots in a large real-world Value-Added Tax domain. Based on our experiments, our approach is an effective instrument for hot-spot and heat-map exploration since it employs visualization techniques that are easy to understand. In future, different profiling or clustering algorithms and sampling techniques can be applied to further improve the performance of the proposed approach. Notwithstanding, compared with supervised approaches, unsupervised methods are less precise: they will not only identify tax fraud cases, but will also flag taxpayers with irregular and suspicious tax behavior and dishonest taxpayers. Future research using hybrid algorithms may produce higher-quality research outcomes.


To Know more about Journal of Forensic Sciences & Criminal Investigation
Click here: https://juniperpublishers.com/jfsci/index.php

To Know more about our Juniper Publishers
Click here: https://juniperpublishers.com/index.php



Thursday, October 10, 2024

Juniper Publishers FAQs: Your Guide to Common Publishing Questions and Solutions

Juniper Publishers FAQs


Q: What are the Manuscript Guidelines in Open Access Journal Publishers?

A: Manuscript guidelines in open-access journal publishers, including Juniper Publishers, are designed to ensure the quality, consistency, and integrity of the research they publish. Authors are expected to submit manuscripts that reflect their original work or work they were involved in during their tenure. The manuscript should present authentic results, data, and ideas that have not been published elsewhere.

For Juniper Publishers, the guidelines emphasize the importance of originality and authenticity. Authors must ensure their manuscripts contain original and verifiable research, and any use of previously published material must be properly cited. It is also critical that authors do not submit the same manuscript simultaneously to multiple journals, as this can lead to conflicts over publication rights and unnecessary duplication of the peer review process.

 In preparing a manuscript, authors should avoid intellectual property theft and plagiarism, as these are considered serious ethical breaches. If information or data from external sources is included, it should only be done with proper permission from the source owner. These guidelines help maintain the high ethical standards expected in scholarly publishing and ensure that the research published by Juniper Publishers and other open-access journals is of the highest quality.


Q: What is the Peer Review Process at Juniper Publishers?

A: The peer review process at Juniper Publishers, like in many Open-Access Journal Publishers, is a critical part of ensuring the quality and integrity of the research they publish.

When a manuscript is submitted, it first undergoes an initial assessment by the editorial team to ensure it meets the journal’s scope and standards. If it passes this stage, the manuscript is then sent to expert reviewers in the relevant field. These reviewers evaluate the manuscript for its originality, validity, and significance. They may recommend acceptance, revisions, or rejection based on their assessment.

The process is typically double-blind, meaning that both the reviewers and the authors remain anonymous to each other to ensure impartiality. After the reviewers submit their feedback, the editor makes a final decision on the manuscript, considering the reviewers' comments and recommendations.

The peer review process is vital in maintaining the scholarly quality of the content published by Juniper Publishers, ensuring that only robust and high-quality research is disseminated to the academic community and the public.


Q: What is Open Access Publishing?

A: Open access publishing provides unrestricted access to scholarly research, allowing anyone to read, download, and share articles without paywalls. This model supports the dissemination of knowledge and accelerates scientific progress by making research more accessible to a global audience.


Q: What are the Benefits of Open Access for Authors?

A: Authors benefit from open access by gaining wider visibility and higher citation rates. By removing barriers to access, their work reaches a larger audience, including researchers, practitioners, and the general public, which can lead to greater impact and recognition.


Q: How Does Open Access Support the Scientific Community?

A: Open access democratizes knowledge by making research freely available, enabling collaboration and innovation. It supports the global exchange of ideas, helping to bridge gaps between different regions and institutions, and fostering a more inclusive scientific community.


Q: What Role Does Juniper Publishers Play in Open Access Publishing?

A: Juniper Publishers is a leading open access publisher, offering a wide range of journals across various disciplines. Committed to promoting high-quality research, Juniper Publishers provides a platform for researchers to publish their work and reach a global audience without the restrictions of traditional subscription-based journals.


Q: Why Should Researchers Choose Juniper Publishers for Their Open Access Publishing?

A: Researchers should consider Juniper Publishers for their open access publishing needs due to its strong reputation, rigorous peer-review process, and commitment to ethical publishing standards. With a user-friendly submission system and a wide-reaching audience, Juniper Publishers ensures that your research receives the visibility and recognition it deserves.


Q: How Does Open Access Publishing with Juniper Publishers Enhance Your Research Impact?

A: Publishing with Juniper Publishers enhances your research impact by ensuring your work is freely accessible to a global audience. This increases the likelihood of your research being cited, discussed, and applied across various fields, ultimately contributing to greater academic and professional recognition.


Q: What Journals Does Juniper Publishers Offer?

A: Juniper Publishers offers a diverse range of journals covering disciplines such as medicine, engineering, environmental science, and social sciences. Each journal is dedicated to publishing high-quality research and is indexed in major databases, ensuring your work is easily discoverable.


Q: How Can I Submit My Manuscript to Juniper Publishers?

A: Submitting your manuscript to Juniper Publishers is straightforward. Visit their website, select the appropriate journal for your research, and follow the submission guidelines provided. The platform offers a seamless submission process with support available at every step.


Q: Are There Any Fees Associated with Publishing in Juniper Publishers' Open Access Journals?

A: Yes, like most open access publishers, Juniper Publishers charges an Article Processing Charge (APC) to cover the costs of publication. This fee ensures that your research is freely accessible to readers worldwide, with no subscription or access fees.

Q: What Are the Author Guidelines for Submitting Manuscripts to Juniper Publishers?

A: Authors submitting manuscripts to Juniper Publishers must ensure that their work is original and authentic. The manuscript should represent the author's own research or work they have been associated with during their tenure. It is crucial that the results, data, and ideas presented are original and have not been published elsewhere.

Authors must not submit the same manuscript simultaneously to more than one journal, as this can lead to conflicts between journals over publication rights. Additionally, this practice could result in multiple journals unknowingly conducting peer reviews, editing, and publishing the same article, which is inefficient and unethical.

Juniper Publishers strictly prohibits intellectual property theft and plagiarism. Authors are expected to maintain high ethical standards, ensuring that any data or information sourced from external media is properly authorized and cited with permission from the original owner. This commitment to integrity upholds the quality and trustworthiness of the research published by Juniper Publishers.


Q: What Are the Editor Guidelines and Responsibilities at Juniper Publishers?

A: Editors at Juniper Publishers play a crucial role in maintaining the quality and integrity of the journals. Their responsibilities include:

  • Engaging with the Community: Editors are encouraged to seek feedback from associate editors, authors, readers, reviewers, and editorial board members to continuously improve the journal's content.
  • Setting High Standards: The reputation of Juniper Publishers is enhanced by the contribution of eminent editors who are committed to raising the journal's standards whenever possible.
  • Educating Researchers: Editors should actively support initiatives that educate researchers and young scholars about publication policies and ethics, fostering a culture of integrity in research.
  • Welcoming Suggestions: Editorial board members are invited to share their valuable suggestions for the organizational progress of Juniper Publishers.
  • Flexible Review Process: Editors are expected to review submitted manuscripts within a feasible time frame. If an editor is unable to review a manuscript, they can suggest alternative reviewers.
  • Confidentiality and Consent: Editors must ensure the confidentiality of data related to the task. If a manuscript contains information about specific individuals, particularly in medical or scientific records, the editorial team must secure written consent from those individuals before publication.
  • Timely and Clear Decisions: Editors must make timely editorial decisions and communicate them clearly to the relevant parties.
  • Ensuring Scientific Validity: Editors are responsible for verifying the validity of scientific facts in manuscripts. They should also ensure that the critique of a manuscript is open for all to assess.
  • Assuring Originality: The editorial board must ensure that published content is original. Proper citation and acknowledgment of the original source are essential for the reliability of the author's work.
  • Final Decision Authority: The final decision regarding the modification, acceptance, or rejection of a manuscript rests solely with the editor, ensuring a fair and transparent review process.


Q: What Are the Associate Editor Guidelines and Responsibilities at Juniper Publishers?

A: The Associate Editors at Juniper Publishers play a pivotal role in ensuring the publication of high-quality manuscripts. Their roles and responsibilities include:

  • Quality Assurance: Associate Editors are responsible for overseeing the publication of quality manuscripts on subjects relevant to their expertise.
  • Educational Initiatives: They are encouraged to sustain efforts that educate researchers and young scholars about publication ethics, helping to foster a culture of integrity in academic publishing.
  • Flexible Review Process: Associate Editors can review submitted manuscripts at their convenience. If time constraints prevent them from reviewing, they can suggest alternative reviewers to ensure a smooth and timely review process.
  • Confidentiality and Consent: Associate Editors must maintain the confidentiality of data related to their tasks. If a manuscript involves information about real individuals, particularly in medical or scientific records, the editorial team must secure written consent from these individuals before the information can be published.
  • Ensuring Scientific Validity: They must verify the validity of the scientific facts presented in manuscripts, allowing open critique and discussion to assess the manuscript's quality.
  • Ensuring Originality: The editorial board members, including Associate Editors, must ensure that all published content is original. Proper citation and acknowledgment of the original source are crucial to maintaining the reliability and integrity of the author's work.
  • Welcoming Suggestions: Associate Editors are invited to offer valuable suggestions that contribute to the organizational progress of Juniper Publishers, ensuring continuous improvement and excellence in publication standards.


Q: What is the Plagiarism Policy at Juniper Publishers?

A: Plagiarism is the unauthorized use of another author’s thoughts or work without proper credit, and it is considered a serious academic offense. At Juniper Publishers, all manuscripts submitted to their journals undergo plagiarism scanning to ensure originality. If potential plagiarism is detected, the authors are contacted for clarification.

Authors are expected to submit entirely original works. If they incorporate the work or words of others, they must properly cite the source within their paper using internal citations. Failing to quote, cite, or acknowledge another person's words or ideas correctly constitutes plagiarism. Juniper Publishers considers plagiarism in any form to be unethical publishing behavior and will not tolerate it.

This policy highlights the importance of academic integrity and the commitment of Juniper Publishers to uphold high standards in scholarly publishing.


Q: What is the Peer Review Process at Juniper Publishers?

A: The peer review process at Juniper Publishers, as at many open-access publishers, is a critical part of ensuring the quality and integrity of the research they publish.

When a manuscript is submitted, it first undergoes an initial assessment by the editorial team to ensure it meets the journal’s scope and standards. If it passes this stage, the manuscript is then sent to expert reviewers in the relevant field. These reviewers evaluate the manuscript for its originality, validity, and significance. They may recommend acceptance, revisions, or rejection based on their assessment.

The process is typically double-blind, meaning that both the reviewers and the authors remain anonymous to each other to ensure impartiality. After the reviewers submit their feedback, the editor makes a final decision on the manuscript, considering the reviewers' comments and recommendations.

The peer review process is vital in maintaining the scholarly quality of the content published by Juniper Publishers, ensuring that only robust and high-quality research is disseminated to the academic community and the public.

 

Q: What is an “Open Access Policy”?

A: Open access policies are part of a rapidly growing movement in academia to enhance and encourage new modes and techniques of scholarly publication by providing free access worldwide. Members of universities, schools and departments are establishing open access policies to make their research and scholarship more accessible to scholars, educators, policymakers, students and citizens around the world.

 


Q: What are the benefits of the Open Access policy?

  • Everyone visiting the Juniper Publishers website can browse the content and read our published work.
  • Juniper Publishers provides a platform for researchers, scholars and professionals to preserve their work in its online repository, Academic Commons, and to make that work available to anyone who seeks it.
  • Juniper Publishers maintains a globally accessible repository that facilitates the free exchange of scholarly information worldwide. Scholars across the globe are welcome to benefit from our open access policy, which covers the research articles, reviews, case studies, opinions, short communications and other publications in our journals. To aid discoverability, materials in the repository are assigned accurate metadata and optimized for discovery via search engines.


Q: Where can we submit our articles for publication?

A: We welcome interested authors to submit their articles by filling in the Manuscript Submission Form through the online link, or by visiting the respective journal page, where the email IDs of the concerned department are listed. Authors can also submit directly to info@juniperpublishers.com.


Q: How long does it take for a submitted article to be published?

A: The peer review process plays a vital role in publication in open access journals. All submitted manuscripts are promptly and fully peer reviewed by our journal Academic Editors. The review process is double blind, and the Editor checks that the manuscript has been prepared according to the required protocols. Papers that do not comply with the standard criteria may be rejected. Because the timeline depends on author and reviewer responses, we cannot predict the exact time, but we aim to publish as quickly as possible.


Q: I am already an editorial board member of a journal; can I apply to several journals at a time?

A: Yes, you can. Professionals and experts can apply to the journals in their respective fields at the same time through our online registration process.


Q: Which types of articles do you accept for publication?

A: We accept all kinds of work for publication in our journals, including Research Articles, Review Articles, Short Communications, Case Reports, Mini-Reviews, Commentaries, Opinions, Proceedings, Letters, Special Issues, E-books, Video formats, etc.


Q: Can we get a printed version of our published work?

A: Yes, but only on the special request of the authors, so we call this service “Print on Demand”. On request, we print the concerned published material from our journals and send it to you.

If you have any query that is not clarified here, please contact us at info@juniperpublishers.com

Q: Is Juniper Publishers reliable?

A:  Yes, Juniper Publishers is a reliable and well-regarded academic publishing platform. They offer a wide range of peer-reviewed, open-access journals that cover diverse scientific disciplines. Their focus on high-quality research and a rigorous review process ensures that only valuable, well-vetted scientific contributions are published.

Juniper Publishers is committed to advancing global research by providing an accessible platform for scholars, researchers, and professionals to disseminate their work. They prioritize transparency, rapid publication, and the widespread dissemination of scientific knowledge, making them a trusted name among many in the academic community.

Additionally, their open-access model ensures that research is available to a global audience, increasing visibility and citation potential for authors.

See our full Editor Guidelines: https://juniperpublishers.com/editor-guidelines.php


Q: Is Juniper Publishers a good place to publish my paper?

A:  Yes, Juniper Publishers is a great choice for publishing your paper. They offer a range of peer-reviewed, open-access journals across various scientific fields, ensuring that your research reaches a global audience. With a commitment to maintaining high standards of quality, Juniper Publishers ensures that all submissions undergo a rigorous review process, enhancing the credibility and impact of your work.

By publishing with Juniper, you'll benefit from their fast-track publication process, which helps in timely dissemination of your research. Their open-access model also increases the visibility of your work, making it accessible to researchers, scholars, and professionals worldwide, thereby enhancing the chances of higher citations and recognition in your field.

See our testimonials: https://juniperpublishers.com/testimonials.php


Q: Is the Juniper Publishers Journal of Gynecology and Women's Health a good journal to publish my paper?

A:  Yes, Juniper Publishers' Journal of Gynecology and Women's Health is an excellent platform to publish your paper. The journal is known for its focus on high-quality research in the field of gynecology and women's health, making it an ideal venue for scholars and practitioners who wish to contribute valuable insights to this important area of medicine.

The journal follows a rigorous peer-review process, ensuring that only well-researched, credible papers are accepted for publication. Additionally, as an open-access journal, it provides global visibility for your work, making it accessible to researchers, clinicians, and healthcare professionals worldwide. This increased exposure can lead to greater citations and recognition within the academic community.

Publishing with the Journal of Gynecology and Women's Health also means benefiting from their efficient and transparent publication process, allowing your work to be shared in a timely manner. If you are looking to reach a wide and relevant audience in the field, this journal is a solid choice.

See the journal page: https://juniperpublishers.com/jgwh/

Q: What are the impact factors of the Juniper Publishers journals?

  • Agricultural Research & Technology: Open Access Journal (ARTOAJ) - 2.372
  • International Journal of Environmental Sciences & Natural Resources (IJESNR) - 2.034
  • Journal of Gynecology and Womens Health (JGWH) - 1.8
  • Global Journal of Otolaryngology (GJO) - 1.952
  • Cancer Therapy & Oncology International Journal (CTOIJ) - 1.774
  • Orthopedics and Rheumatology Open Access Journal (OROAJ) - 1.921
  • Current Trends in Biomedical Engineering & Biosciences (CTBEB) - 2.003
  • Psychology and Behavioral Science International Journal (PBSIJ) - 1.941
  • Advanced Research in Gastroenterology & Hepatology (ARGH) - 1.853
  • Advances in Dentistry & Oral Health (ADOH) - 2.375
  • Journal of Cardiology & Cardiovascular Therapy (JOCCT) - 2.024
  • Oceanography & Fisheries Open Access Journal (OFOAJ) - 1.624
  • Current Research in Diabetes & Obesity Journal (CRDOJ) - 1.760
  • Advances in Biotechnology & Microbiology (AIBM) - 1.874
  • Journal of Forensic Sciences & Criminal Investigation (JFSCI) - 1.971
  • Journal of Dairy & Veterinary Sciences (JDVS) - 1.723
  • Open Access Journal of Surgery (OAJS) - 1.577
  • Juniper Online Journal of Case Studies (JOJCS) - 1.651
  • Academic Journal of Pediatrics & Neonatology (AJPN) - 1.868
  • Journal of Anesthesia & Intensive Care Medicine (JAICM) - 1.798
  • Open Access Journal of Neurology & Neurosurgery (OAJNN) - 1.635
  • Organic & Medicinal Chemistry International Journal (OMCIJ) - 2.165
  • Journal of Complementary Medicine & Alternative Healthcare (JCMAH) - 1.732
  • Civil Engineering Research Journal (CERJ) - 1.792
  • Nutrition and Food Science International Journal (NFSIJ) - 1.839
  • Global Journal of Archaeology & Anthropology (GJAA) - 1.648
  • Global Journal of Intellectual & Developmental Disabilities (GJIDD) - 1.604
  • Annals of Reviews and Research (ARR) - 1.791
  • JOJ Ophthalmology (JOJO) - 1.772
  • Global Journal of Pharmacy & Pharmaceutical Sciences (GJPPS) - 1.735
  • Biostatistics and Biometrics Open Access Journal (BBOAJ) - 2.012
  • Journal of Physical Fitness, Medicine & Treatment in Sports (JPFMTS) - 1.842
  • Journal of Yoga and Physiotherapy (JYP) - 1.694
  • Global Journal of Reproductive Medicine (GJORM) - 1.659
  • Annals of Social Sciences & Management Studies (ASM) - 1.848
  • JOJ Urology & Nephrology (JOJUN) - 1.78
  • Journal of Pharmacology & Clinical Research (JPCR) - 1.781
  • Juniper Online Journal Material Science (JOJMS) - 1.518
  • Current Trends in Fashion Technology & Textile Engineering (CTFTTE) - 1.915
  • International Journal of Cell Science & Molecular Biology (IJCSMB) - 1.689
  • Anatomy Physiology & Biochemistry International Journal (APBIJ) - 1.432
  • Global Journal of Addiction & Rehabilitation Medicine (GJARM) - 1.89
  • Open Access Journal of Gerontology & Geriatric Medicine (OAJGGM) - 1.55
  • Trends in Technical & Scientific Research (TTSR) - 1.589
  • International Journal of Pulmonary & Respiratory Sciences (IJOPRS) - 1.794
  • Juniper Online Journal of Public Health (JOJPH) - 1.782
  • JOJ Dermatology & Cosmetics (JOJDC) - 1.735
  • Open Access Journal of Toxicology (OAJT) - 1.871
  • Academic Journal of Polymer Science (AJOP) - 1.853
  • Robotics & Automation Engineering Journal (RAEJ) - 1.734
  • Journal of Tumor Medicine & Prevention (JTMP) - 1.609
  • Engineering Technology Open Access Journal (ETOAJ) - 1.79
  • Ecology & Conservation Science: Open Access (ECOA) - 1.687
  • Palliative Medicine & Care International Journal (PMCIJ) - 1.879
  • Insights in Mining Science & Technology (IMST) - 1.671
  • JOJ Horticulture & Arboriculture (JOJHA) - 1.831
  • JOJ Wildlife & Biodiversity (JOJWB) - 1.838

