Intelligent Systems in Accounting, Finance and Management

Publisher:
Wiley
Publication date:
2021-02-01
ISSN:
1055-615X

Latest documents

  • Journal entry anomaly detection model

    Summary Although numerous scientific papers have been written on deep learning, very few address its exploitation in accounting or bookkeeping. Our study is oriented toward exactly this field. As accountants, we know the problems faced in modern accounting. Although accountants have extensive technological support at their disposal, searching for errors or fraud remains a demanding, time‐consuming task that depends on manual skill and professional knowledge. Our efforts are directed at automating error detection, which new technologies now make possible, and we develop a web application that alleviates the problems of journal entry anomaly detection. The application accepts data from one specific enterprise resource planning system while also representing a general software framework for other enterprise resource planning developers. It is a prototype that uses two of the most popular deep‐learning architectures; namely, a variational autoencoder and long short‐term memory. The application was tested on two different journals: data set D, trained on accounting journals from 2007 to 2018 and tested on the year 2019, and data set H, trained on journals from 2014 to 2016 and tested on the year 2017. Both accounting journals were generated by micro entrepreneurs.
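
    The paper's variational-autoencoder detector is not reproduced in the abstract; the underlying idea of flagging journal entries by reconstruction error can be sketched with a linear autoencoder (PCA) as a simplified stand-in, with all data, dimensions, and thresholds below invented for illustration:

```python
import numpy as np

def fit_reconstruction_detector(X, n_components=2):
    """Fit a linear autoencoder (PCA) -- a simplified stand-in for the paper's VAE."""
    mu = X.mean(axis=0)
    # principal directions of the centred data via SVD
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:n_components].T          # shared encoder/decoder weights
    return mu, W

def anomaly_scores(X, mu, W):
    """Reconstruction error per row: large error = entry the model cannot explain."""
    Xc = X - mu
    X_hat = Xc @ W @ W.T             # encode then decode
    return np.linalg.norm(Xc - X_hat, axis=1)

# Toy "journal": column 1 is always twice column 0 (a debit/credit-like invariant).
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 5))
normal[:, 1] = normal[:, 0] * 2
mu, W = fit_reconstruction_detector(normal, n_components=3)

outlier = np.array([[10.0, -10.0, 0.0, 0.0, 0.0]])  # breaks the learned invariant
scores_normal = anomaly_scores(normal, mu, W)
score_outlier = anomaly_scores(outlier, mu, W)[0]
# the invariant-breaking entry receives a far larger reconstruction error
```

    A VAE replaces the linear projection with learned nonlinear encoders and a probabilistic latent space, and an LSTM adds sequence context, but the scoring logic (reconstruct, then threshold the error) is the same.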

  • RegTech—the application of modern information technology in regulatory affairs: areas of interest in research and practice

    Summary We provide a high‐level view on topics addressed in scientific articles about regulatory technology (RegTech), with a particular focus on technologies used. For this purpose, we first explore different denominations for RegTech and derive search queries to search relevant literature portals. From the hits of that information retrieval process, we select 55 articles outlining the application of information technology in regulatory affairs with an emphasis on the financial sector. In comparison, we examine the technological scope of 347 RegTech companies and compare our findings with the scientific literature. Our research reveals that ‘compliance management’ is the most relevant topic in practice, and ‘risk management’ is the primary subject in research. The most significant technologies as of today are ‘artificial intelligence’ and distributed ledger technologies such as ‘blockchain’.

  • Issue Information

    No abstract is available for this article.

  • The digital future of internal staffing: A vision for transformational electronic human resource management

    Summary Through an international Delphi study, this article explores the new electronic human resource management regimes that are expected to transform internal staffing. Our focus is on three types of information systems: human resource management systems, job portals, and talent marketplaces. We explore the future potential of these new systems and identify the key challenges for their implementation in governments, such as inadequate regulations and funding priorities, a lack of leadership and strategic vision, together with rigid work policies and practices and a change‐resistant culture. Tied to this vision, we identify several areas of future inquiry that bridge the divide between theory and practice.

  • Modelling unbalanced catastrophic health expenditure data by using machine‐learning methods

    Summary This study compares the performance of logistic regression and random forest classifiers, combined with a balanced oversampling procedure, in predicting which households will face catastrophic out‐of‐pocket (OOP) health expenditure. Data were derived from the nationally representative household budget survey collected by the Turkish Statistical Institute for the year 2012. A total of 9,987 households returned valid surveys. The data set was highly imbalanced: the proportion of households facing catastrophic OOP health expenditure was 0.14. Balanced oversampling was performed, and 30 artificial data sets were generated with sizes of 5% and 98% of the original data size. The balanced oversampled data set provided accurate predictions, and random forest exhibited superior performance in identifying households facing catastrophic OOP health expenditure (area under the receiver operating characteristic curve, AUC = 0.8765; classification accuracy, CA = 0.7936; sensitivity = 0.7765; specificity = 0.8552; F1 = 0.7797).
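
    The oversample-then-classify pipeline can be sketched with scikit-learn on synthetic data; the sample sizes, imbalance ratio, and forest settings below are invented and do not reproduce the study's survey data or results:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the household survey: ~1% of cases are "catastrophic".
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balanced oversampling: bootstrap minority rows up to the majority count.
minority = np.flatnonzero(y_tr == 1)
majority = np.flatnonzero(y_tr == 0)
rng = np.random.default_rng(0)
boot = rng.choice(minority, size=len(majority), replace=True)
idx = np.concatenate([majority, boot])

# Fit on the balanced sample, evaluate on the untouched (imbalanced) test split.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[idx], y_tr[idx])
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

    Note that oversampling is applied only to the training split; evaluating on oversampled data would inflate AUC and sensitivity.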

  • A neural‐network‐based decision‐making model in the peer‐to‐peer lending market

    Summary This study proposes an investment recommendation model for peer‐to‐peer (P2P) lending. P2P lenders are usually inexpert, so helping them make sound investment decisions is vital. While comparing the performance of different artificial neural network (ANN) models, we evaluate loans from two perspectives: risk and return. The net present value (NPV) is taken as the return variable; to the best of our knowledge, NPV has been used in only a few studies in the P2P lending context. Given the advantages of NPV, we aim to improve decision‐making models in this market through the use of NPV and the integration of supervised learning and optimization algorithms, which we consider one of our contributions. To predict NPV, three ANN models are compared in terms of mean square error, mean absolute error, and root‐mean‐square error to find the optimal ANN model. For the risk evaluation, the probability of default of each loan is computed using logistic regression. Because investors in the P2P lending market can spread their assets across different loans, P2P investment resembles portfolio optimization; accordingly, we minimize the risk of a portfolio subject to a minimum acceptable level of return. To analyse the effectiveness of the proposed model, we compare our decision‐making algorithm with the output of a traditional model. Experimental results on a real‐world data set show that our model leads to better investments in terms of both risk and return.
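
    The return variable itself is standard finance: the NPV of a loan discounts its realized cash flows back to the origination date. A minimal sketch, with a hypothetical 12-month loan and a hypothetical 0.5% monthly discount rate:

```python
def npv(rate, cashflows):
    """Net present value of periodic cash flows; cashflows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical loan: invest 1,000 now, receive 90 per month for 12 months.
flows = [-1000.0] + [90.0] * 12
value = npv(0.005, flows)   # discounted at 0.5% per month
```

    Scoring loans by NPV rather than nominal interest rate is what lets the model rank loans with identical rates but different repayment behaviour (prepayment, partial default) by their actual realized return.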

  • Trend‐cycle Estimation Using Fuzzy Transform and Its Application for Identifying Bull and Bear Phases in Markets

    Summary This paper is focused on one of the fundamental problems in financial time‐series analysis; namely, the identification of the historical bull and bear phases. We start with the proof that the trend‐cycle can be well estimated using the technique of a higher degree fuzzy transform. Then, we suggest a mathematical definition of the bull and bear phases and provide a novel technique for their identification. As a consequence, the turning points (i.e. the points where the market changes its phase) are detected. We illustrate our methodology on several examples.
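
    The paper uses a higher-degree fuzzy transform; as a simplified illustration, the zero-degree F-transform computes local weighted means under triangular basis functions and inverts them into a smooth trend-cycle. All series and node counts below are invented:

```python
import numpy as np

def f_transform_trend(y, n_nodes=8):
    """Zero-degree fuzzy (F-)transform trend estimate -- a simplified sketch of
    the paper's higher-degree variant."""
    x = np.linspace(0.0, 1.0, len(y))
    nodes = np.linspace(0.0, 1.0, n_nodes)
    h = nodes[1] - nodes[0]
    # triangular basis functions forming a uniform fuzzy partition of [0, 1]
    A = np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)
    F = (A @ y) / A.sum(axis=1)        # F-transform components: local weighted means
    return (A.T @ F) / A.sum(axis=0)   # inverse F-transform: the smooth trend-cycle

# Noisy sine as a stand-in for a price series; the trend should track the sine.
t = np.linspace(0, 4 * np.pi, 300)
series = np.sin(t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
trend = f_transform_trend(series, n_nodes=12)
```

    Bull and bear phases could then be read off the sign of the trend's first differences, with turning points at the sign flips; the paper's contribution is a rigorous definition of those phases rather than this heuristic.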

  • Tick size and market quality: Simulations based on agent‐based artificial stock markets

    Summary This paper investigates how minimum tick size affects market quality, based on an agent‐based artificial stock market. Our results indicate that stepwise and combination systems can promote market quality in certain respects compared with a uniform system, and a minimal combination system performed best at improving market quality. This is the first study to analyse tick size systems that so far exist only in theory and to compare four types of system in the same experimental environment. The results suggest that a minimal combination system could be considered a new direction for market policy reform to improve market quality.
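
    The abstract does not specify the four systems, but the distinction between a uniform and a stepwise tick schedule can be sketched as follows; the thresholds and tick values are hypothetical:

```python
def tick_size(price, system="stepwise"):
    """Hypothetical tick schedules: 'uniform' applies one tick everywhere,
    'stepwise' coarsens the tick at higher price levels."""
    if system == "uniform":
        return 0.01
    if price < 10:
        return 0.01
    if price < 100:
        return 0.05
    return 0.10

def round_to_tick(price, system="stepwise"):
    """Snap a quoted price onto the system's tick grid."""
    t = tick_size(price, system)
    return round(round(price / t) * t, 10)
```

    In an agent-based simulation, every agent quote passes through such a rounding step, and market-quality measures (spread, depth, volatility) are then compared across schedules.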

  • A Google–Wikipedia–Twitter Model as a Leading Indicator of the Numbers of Coronavirus Deaths

    Summary Forecasting the number of cases and the number of deaths in a pandemic provides critical information to governments and health officials, as seen in the management of the coronavirus outbreak. But conditions change, so there is a constant search for real‐time, leading indicator variables that can feed disease propagation models. Researchers have found that information about social media and search engine use can provide insights into the diffusion of flu and other diseases. Consistent with this finding, we found that the numbers of Google searches, Twitter tweets, and Wikipedia page views provide a leading indicator of the number of people in the USA who will become infected by and die from the coronavirus. Although we focus on the current coronavirus pandemic, other recent viruses have also threatened pandemics (e.g. severe acute respiratory syndrome). Since future and existing diseases are likely to trigger a similar search for information, our insights may prove fruitful in dealing with the coronavirus and other such diseases, particularly in their early phases. Subject terms: coronavirus, COVID‐19, unintentional crowd, Google searches, Wikipedia page views, Twitter tweets, models of disease diffusion.
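
    One simple way to test leading-indicator behaviour, not necessarily the paper's method, is to scan lagged correlations between the online-activity series and the outcome series. The data below are synthetic, built so the "deaths" series trails the "searches" series by 10 days:

```python
import numpy as np

def best_lead_time(signal, target, max_lag=21):
    """Return the lag (in periods) at which `signal` correlates most strongly
    with the later `target` series."""
    corr = {lag: np.corrcoef(signal[:-lag], target[lag:])[0, 1]
            for lag in range(1, max_lag + 1)}
    return max(corr, key=corr.get)

# Synthetic check: deaths constructed to trail searches by exactly 10 days.
rng = np.random.default_rng(0)
base = np.cumsum(rng.normal(size=200))
searches = base + 0.05 * rng.normal(size=200)
deaths = np.concatenate([np.full(10, base[0]), base[:-10]]) + 0.05 * rng.normal(size=200)
```

    A recovered lead time of this kind is what makes the online series usable as an input to forecasts issued ahead of the official counts.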
