Intelligent Systems in Accounting, Finance and Management

- Publisher: Wiley
- Publication date: 2021-02-01
- ISSN: 1055-615X
Latest documents
- Journal entry anomaly detection model
Summary Although numerous scientific papers have been written on deep learning, very few address its exploitation in accounting or bookkeeping. Our study is oriented toward exactly this field. As accountants, we know the problems faced in modern accounting: despite extensive technological support, searching for errors or fraud remains a demanding, time-consuming task that depends on manual skill and professional knowledge. We address the automation of error detection, now feasible with new technologies, by developing a web application that alleviates the problem of journal entry anomaly detection. The application accepts data from one specific enterprise resource planning (ERP) system while also serving as a general software framework for other ERP developers. It is a prototype that uses two of the most popular deep-learning architectures, namely a variational autoencoder and long short-term memory. The application was tested on two different journals: data set D, trained on accounting journals from 2007 to 2018 and tested on the year 2019, and data set H, trained on journals from 2014 to 2016 and tested on the year 2017. Both accounting journals were generated by micro entrepreneurs.
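As a rough illustration of the reconstruction-error idea behind such detectors, the sketch below trains a small variational autoencoder and flags entries whose reconstruction error is unusually high. The feature dimensions, network sizes, and 3-sigma threshold are assumptions for the example, not the authors' implementation.

```python
# A minimal sketch, assuming tabular journal-entry features; not the authors'
# implementation. Entries with unusually high reconstruction error under a
# small variational autoencoder are flagged as candidate anomalies.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU())
        self.mu = nn.Linear(16, latent_dim)
        self.logvar = nn.Linear(16, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

x_train = torch.randn(500, 8)            # stand-in for encoded journal entries
model = VAE(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                     # reconstruction loss plus KL penalty
    recon, mu, logvar = model(x_train)
    loss = ((recon - x_train) ** 2).sum(dim=1).mean() \
         - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    recon, _, _ = model(x_train)
    err = ((recon - x_train) ** 2).sum(dim=1)
    flagged = err > err.mean() + 3 * err.std()   # assumed 3-sigma anomaly rule
```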
- RegTech—the application of modern information technology in regulatory affairs: areas of interest in research and practice
Summary We provide a high‐level view on topics addressed in scientific articles about regulatory technology (RegTech), with a particular focus on technologies used. For this purpose, we first explore different denominations for RegTech and derive search queries to search relevant literature portals. From the hits of that information retrieval process, we select 55 articles outlining the application of information technology in regulatory affairs with an emphasis on the financial sector. In comparison, we examine the technological scope of 347 RegTech companies and compare our findings with the scientific literature. Our research reveals that ‘compliance management’ is the most relevant topic in practice, and ‘risk management’ is the primary subject in research. The most significant technologies as of today are ‘artificial intelligence’ and distributed ledger technologies such as ‘blockchain’.
- Issue Information
No abstract is available for this article.
- The digital future of internal staffing: A vision for transformational electronic human resource management
Summary Through an international Delphi study, this article explores the new electronic human resource management regimes that are expected to transform internal staffing. Our focus is on three types of information systems: human resource management systems, job portals, and talent marketplaces. We explore the future potential of these new systems and identify the key challenges for their implementation in governments, such as inadequate regulations and funding priorities, a lack of leadership and strategic vision, together with rigid work policies and practices and a change‐resistant culture. Tied to this vision, we identify several areas of future inquiry that bridge the divide between theory and practice.
- Modelling unbalanced catastrophic health expenditure data by using machine‐learning methods
Summary This study aims to compare the performances of logistic regression and random forest classifiers in a balanced oversampling procedure for the prediction of households that will face catastrophic out‐of‐pocket (OOP) health expenditure. Data were derived from the nationally representative household budget survey collected by the Turkish Statistical Institute for the year 2012. A total of 9,987 households returned valid surveys. The data set was highly imbalanced, and the percentage of households facing catastrophic OOP health expenditure was 0.14. Balanced oversampling was performed, and 30 artificial data sets were generated with sizes of 5% and 98% of the original data size. The balanced oversampled data set provided accurate predictions, and random forest exhibited superior performance in identifying households facing catastrophic OOP health expenditure (area under the receiver operating characteristic curve, AUC = 0.8765; classification accuracy, CA = 0.7936; sensitivity = 0.7765; specificity = 0.8552; F1 = 0.7797).
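The balanced-oversampling workflow the abstract describes can be sketched as follows with scikit-learn on synthetic data; the class prevalence, train/test split, and 0.5 decision cut-off are assumptions for the example, not the study's protocol.

```python
# A minimal sketch of balanced oversampling, assuming synthetic data in place
# of the survey variables; prevalence, split and cut-off are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

X, y = make_classification(n_samples=10000, weights=[0.9986], flip_y=0,
                           random_state=0)            # rare positive class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Resample the minority class up to the majority size (balanced oversampling).
X_maj, X_min = X_tr[y_tr == 0], X_tr[y_tr == 1]
X_up = resample(X_min, n_samples=len(X_maj), replace=True, random_state=0)
X_bal = np.vstack([X_maj, X_up])
y_bal = np.hstack([np.zeros(len(X_maj)), np.ones(len(X_up))])

for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)):
    proba = clf.fit(X_bal, y_bal).predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, proba > 0.5).ravel()
    print(type(clf).__name__, "AUC=%.3f" % roc_auc_score(y_te, proba),
          "sens=%.3f" % (tp / (tp + fn)), "spec=%.3f" % (tn / (tn + fp)))
```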
- Issue Information
No abstract is available for this article.
- A neural‐network‐based decision‐making model in the peer‐to‐peer lending market
Summary This study proposes an investment recommendation model for peer-to-peer (P2P) lending. P2P lenders are usually inexpert, so helping them make the best investment decisions is vital. While comparing the performance of different artificial neural network (ANN) models, we evaluate loans from two perspectives: risk and return. The net present value (NPV) is taken as the return variable; to the best of our knowledge, NPV has been used in only a few studies in the P2P lending context. Given the advantages of NPV, we aim to improve decision-making models in this market through its use and through the integration of supervised learning and optimization algorithms, which we consider one of our contributions. To predict NPV, three ANN models are compared with respect to mean square error, mean absolute error, and root-mean-square error to find the optimal model. For risk evaluation, the probability of default of each loan is computed using logistic regression. Because investors in the P2P lending market can spread their assets across different loans, P2P investment resembles portfolio optimization, as sketched below; we therefore minimize the risk of a portfolio subject to a minimum acceptable level of return. To analyse the effectiveness of the proposed model, we compare our decision-making algorithm with the output of a traditional model. Experimental results on a real-world data set show that our model leads to better investments in terms of both risk and return.
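The portfolio step (minimizing risk subject to a minimum acceptable return) can be written as a small constrained optimization; the per-loan return estimates and covariances below are invented for illustration, not the paper's data.

```python
# A minimal sketch, with invented expected returns and covariances: minimize
# portfolio variance subject to a minimum acceptable expected return.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.06, 0.09, 0.12, 0.15])   # e.g. per-loan returns predicted from NPV
cov = np.diag([0.01, 0.02, 0.04, 0.09])   # assumed covariance of loan returns
r_min = 0.10                              # minimum acceptable portfolio return

res = minimize(
    fun=lambda w: w @ cov @ w,            # portfolio risk (variance)
    x0=np.full(len(mu), 1 / len(mu)),
    bounds=[(0, 1)] * len(mu),            # long-only weights
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1},       # fully invested
        {"type": "ineq", "fun": lambda w: w @ mu - r_min},  # return floor
    ],
)
weights = res.x                           # share of capital placed in each loan
```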
- Trend‐cycle Estimation Using Fuzzy Transform and Its Application for Identifying Bull and Bear Phases in Markets
Summary This paper is focused on one of the fundamental problems in financial time‐series analysis; namely, the identification of the historical bull and bear phases. We start with the proof that the trend‐cycle can be well estimated using the technique of a higher degree fuzzy transform. Then, we suggest a mathematical definition of the bull and bear phases and provide a novel technique for their identification. As a consequence, the turning points (i.e. the points where the market changes its phase) are detected. We illustrate our methodology on several examples.
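For intuition, the sketch below applies a zero-degree fuzzy (F-)transform as a trend-cycle smoother; the paper itself uses a higher-degree transform, so this is only the simplest instance of the technique, on a synthetic price series.

```python
# A minimal sketch of the zero-degree F-transform as a smoother; the paper's
# higher-degree version is more expressive. Series and node count are assumed.
import numpy as np

def f_transform(y, n_nodes=10):
    x = np.linspace(0, 1, len(y))
    nodes = np.linspace(0, 1, n_nodes)
    h = nodes[1] - nodes[0]
    # Triangular basis functions A_k forming a fuzzy partition of [0, 1].
    A = np.maximum(0, 1 - np.abs(x[None, :] - nodes[:, None]) / h)
    # Direct transform: F_k = sum_i y_i A_k(x_i) / sum_i A_k(x_i).
    F = (A @ y) / A.sum(axis=1)
    # Inverse transform reconstructs a smooth trend-cycle estimate.
    return (A * F[:, None]).sum(axis=0) / A.sum(axis=0)

prices = np.cumsum(np.random.randn(500))   # stand-in for a price series
trend = f_transform(prices, n_nodes=25)
# Bull/bear phases could then be read off the sign of np.diff(trend).
```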
- Tick size and market quality: Simulations based on agent‐based artificial stock markets
Summary This paper investigates how minimum tick size affects market quality, based on an agent-based artificial stock market. Our results indicate that stepwise and combination systems can promote market quality in certain respects compared with a uniform system, and that a minimal combination system performs best at improving market quality. This is the first study to analyse tick size systems that so far exist only in theory and to compare four types of system under the same experimental environment. The results suggest that a minimal combination system could be considered a new direction for market policy reform to improve market quality.
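The tick-size regimes being compared can be pictured as quote-rounding rules like the following; the price bands and tick values are assumptions for illustration, not the systems tested in the paper.

```python
# A minimal sketch, with assumed price bands and tick values: tick-size
# regimes act as quote-rounding rules imposed on the agents' orders.
def tick_uniform(price):
    return 0.01                       # one tick for all price levels

def tick_stepwise(price):
    if price < 10:                    # assumed bands; larger ticks for
        return 0.001                  # higher-priced stocks
    return 0.01 if price < 100 else 0.05

def round_to_tick(price, rule):
    tick = rule(price)
    return round(price / tick) * tick

print(round_to_tick(57.3219, tick_uniform))    # -> 57.32
print(round_to_tick(134.517, tick_stepwise))   # -> 134.5
```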
- A Google–Wikipedia–Twitter Model as a Leading Indicator of the Numbers of Coronavirus Deaths
Summary Forecasting the number of cases and the number of deaths in a pandemic provides critical information to governments and health officials, as seen in the management of the coronavirus outbreak. But things change. Thus, there is a constant search for real‐time and leading indicator variables that can provide insights into disease propagation models. Researchers have found that information about social media and search engine use can provide insights into the diffusion of flu and other diseases. Consistent with this finding, we found that a model with the number of Google searches, Twitter tweets, and Wikipedia page views provides a leading indicator model of the number of people in the USA who will become infected and die from the coronavirus. Although we focus on the current coronavirus pandemic, other recent viruses have threatened pandemics (e.g. severe acute respiratory syndrome). Since future and existing diseases are likely to follow a similar search for information, our insights may prove fruitful in dealing with the coronavirus and other such diseases, particularly in the early phases of the disease. Subject terms: coronavirus, COVID‐19, unintentional crowd, Google searches, Wikipedia page views, Twitter tweets, models of disease diffusion.
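A leading-indicator model of this kind can be sketched as a lagged regression; the 14-day lead and the synthetic series below are assumptions for the example, not the authors' specification.

```python
# A minimal sketch, assuming a 14-day lead and synthetic series in place of
# the real Google/Twitter/Wikipedia counts and death tolls.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
days = 120
searches = rng.poisson(1000, days).cumsum()    # stand-in: Google searches
tweets = rng.poisson(800, days).cumsum()       # stand-in: Twitter tweets
pageviews = rng.poisson(1200, days).cumsum()   # stand-in: Wikipedia page views
deaths = 0.001 * searches + rng.normal(0, 5, days)

lag = 14                                       # indicators lead deaths by 14 days
X = np.column_stack([searches, tweets, pageviews])[:-lag]
y = deaths[lag:]
model = LinearRegression().fit(X, y)           # online activity today ...
forecast = model.predict(X[-1:])               # ... predicts deaths 14 days out
```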
Featured documents
- Features selection, data mining and financial risk classification: a comparative study
Summary The aim of this paper is to compare several predictive models that combine feature selection techniques with data mining classifiers, in the context of credit risk assessment, in terms of accuracy, sensitivity and specificity statistics. The t‐statistic, Bhattacharyya statistic, the area...
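One of the filter-style selection criteria named above can be sketched as follows; the synthetic data and the ranking rule are assumptions for the example.

```python
# A minimal sketch of one filter criterion, assuming synthetic data: rank
# features by a two-sample t-statistic before fitting a classifier.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
t_scores = [abs(stats.ttest_ind(X[y == 0, j], X[y == 1, j]).statistic)
            for j in range(X.shape[1])]
ranked = np.argsort(t_scores)[::-1]   # most class-discriminative features first
```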
- A pattern‐based approach to extract REA value models from business process models
Summary Business models are economic models that describe the rationale of why organizations create and deliver value. These models focus on what organizations offer and why. Business process models capture business activities and the ways in which they are accomplished (i.e. their coordination)....
- Micro Credit Risk Metrics: A Comprehensive Review
Summary Default modelling is a general term covering several interrelated fields of risk management; bond defaults, credit (loan) defaults, firm defaults and country defaults are examples. This study focuses mainly on firm default. The purpose...
- Lottery Payment Cards: A Study of Mental Accounting
Summary This study analyses the difficulties of using stored‐value cards for noncash payment adoption and payment framing behaviour development. This study applies the Rasch model via mental accounting theory to identify unobservable and latent difficulties in adopting noncash payment instruments...
- Using clustering ensemble to identify banking business models
Summary The business models of banks are often seen as the result of a variety of simultaneously determined managerial choices, such as those regarding the types of activities, funding sources, level of diversification, and size. Moreover, owing to the fuzziness of data and the possibility that...
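A clustering ensemble can be sketched as evidence accumulation over repeated k-means runs; the bank indicators and cluster counts below are invented, and the paper's actual ensemble may differ.

```python
# A minimal sketch of a clustering ensemble, assuming invented bank indicators:
# co-assignments from repeated k-means runs are accumulated into a
# co-association matrix, which is then clustered itself (sklearn >= 1.2).
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

X = np.random.default_rng(0).normal(size=(100, 6))   # stand-in bank indicators
runs = 20
co = np.zeros((len(X), len(X)))
for seed in range(runs):
    labels = KMeans(n_clusters=4, n_init=5, random_state=seed).fit_predict(X)
    co += labels[:, None] == labels[None, :]          # same-cluster evidence
co /= runs

final = AgglomerativeClustering(n_clusters=4, metric="precomputed",
                                linkage="average").fit_predict(1 - co)
```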
- Do Sentiments Matter in Fraud Detection? Estimating Semantic Orientation of Annual Reports
Summary We present a novel approach for analysing the qualitative content of annual reports. Using natural language processing techniques we determine if sentiment expressed in the text matters in fraud detection. We focus on the Management Discussion and Analysis (MD&A) section of annual reports...
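A toy version of semantic-orientation scoring is shown below with a hand-made polarity lexicon; the paper's word lists and NLP pipeline are more elaborate.

```python
# A toy sketch with an assumed polarity lexicon, not the paper's lexicon:
# the orientation score is the normalized balance of polar words.
POSITIVE = {"growth", "improve", "strong", "gain"}
NEGATIVE = {"loss", "decline", "weak", "litigation"}

def semantic_orientation(text):
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)   # score in [-1, 1]

print(semantic_orientation("strong growth offset by litigation loss"))  # -> 0.0
```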
- Management of Knowledge Sources Supported by Domain Ontologies: Building and Construction Case Studies
Summary This paper introduces a novel conceptual framework to support the creation of knowledge representations based on enriched semantic vectors, using the classical vector space model approach extended with ontological support. This work is focused on collaborative engineering projects where...
- TSFDC: A trading strategy based on forecasting directional change
Summary Directional change (DC) is a technique for summarizing price movements in a financial market. Under the DC concept, data are sampled only when the magnitude of the price change is significant to the investor. In this paper, we develop a contrarian trading strategy named TSFDC....
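The DC sampling rule itself is compact enough to sketch: an event is confirmed only when price retraces a threshold theta from the last extreme. The threshold value below is an assumed example.

```python
# A minimal sketch of directional change (DC) event detection; theta is an
# assumed example threshold, and the strategy built on top is not shown.
def dc_events(prices, theta=0.02):
    """Return (direction, index, price) tuples where a DC event is confirmed."""
    events = []
    extreme, mode = prices[0], 'up'   # assume an initial upward run
    for i, p in enumerate(prices):
        if mode == 'up':
            if p > extreme:
                extreme = p                        # new high extends the run
            elif p <= extreme * (1 - theta):       # fall of theta confirms a downturn
                events.append(('down', i, p))
                extreme, mode = p, 'down'
        else:
            if p < extreme:
                extreme = p                        # new low extends the run
            elif p >= extreme * (1 + theta):       # rise of theta confirms an upturn
                events.append(('up', i, p))
                extreme, mode = p, 'up'
    return events

print(dc_events([100, 101, 99, 98.9, 97, 99.5]))   # one downturn, one upturn
```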
- Assessing Systemic Importance With a Fuzzy Logic Inference System
Summary Three metrics are designed to assess Colombian financial institutions' size, connectedness and non‐substitutability as the main drivers of systemic importance: (i) centrality as net borrower in the money market network; (ii) centrality as payments originator in the large‐value payment...
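One fuzzy inference step of the kind the title suggests can be sketched with triangular memberships and a min-based AND; the breakpoints and the rule are assumptions for illustration, not the paper's calibrated system.

```python
# A minimal sketch of one Mamdani-style fuzzy rule, with assumed membership
# breakpoints; min acts as the fuzzy AND.
def tri(x, a, b, c):
    """Triangular membership rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def systemic_importance(size, connectedness):
    big = tri(size, 0.2, 1.0, 1.8)                # membership in 'big institution'
    central = tri(connectedness, 0.2, 1.0, 1.8)   # membership in 'highly connected'
    # Rule: IF size is big AND connectedness is high THEN importance is high.
    return min(big, central)

print(systemic_importance(0.8, 0.9))   # -> 0.75
```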
- Natural Language Processing in Accounting, Auditing and Finance: A Synthesis of the Literature with a Roadmap for Future Research
Summary Natural language processing (NLP) is a part of the artificial intelligence domain focused on communication between humans and computers. NLP attempts to address the inherent problem that while human communications are often ambiguous and imprecise, computers require unambiguous and precise...