Time sensitivity and self-organisation in Multi-recurrent Neural Networks
Model optimisation is a key step in model development, and traditionally this was limited to parameter tuning. However, recent developments and an enhanced understanding of the internal dynamics of model architectures have led to various explorations of optimising and enhancing performance through model extension. In this paper, we extend the architecture of the Multi-recurrent Neural Network (MRN) to incorporate self-learning recurrent link ratios and periodically attentive hidden units. We contrast these extensions with the standard MRN and show their superiority on a complex financial prediction task. The superiority is attributed to i) the ability of the self-learning recurrent link ratios to use the data to identify optimal parameters for the memory mechanism, and ii) the periodically attentive units enabling the hidden layer to capture temporal features that are sensitive to different periods of time. Finally, we evaluate our extended MRNs (the Self-Learning MRN (SL-MRN) and the Periodically Attentive MRN (PA-MRN)) against two current state-of-the-art models (Long Short-Term Memory networks and Support Vector Machines) on an eye state detection task. Our preliminary results demonstrate that the PA-MRN and SL-MRN outperform both state-of-the-art models, suggesting that the MRN extensions are suitable models for machine learning applications; these findings will be explored further.
Orojo, O., Tepper, J. A., McGinnity, T. M., and Mahmud, M. (2020) Time sensitivity and self-organisation in Multi-recurrent Neural Networks. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), July 2020.
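A minimal sketch of how self-learning recurrent link ratios might be realised, assuming a bank of memory units whose blend ratios are trainable parameters; the bank count, ratio parameterisation and initialisation below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a bank of multi-recurrent
# memory units whose recency/persistence ratios are learned from data, in the
# spirit of the SL-MRN described above.
import torch
import torch.nn as nn

class SelfLearningMemoryBanks(nn.Module):
    def __init__(self, n_banks: int = 4):
        super().__init__()
        # One learnable ratio per bank; a sigmoid keeps each ratio in (0, 1).
        self.ratio_logits = nn.Parameter(torch.linspace(-1.0, 1.0, n_banks))

    def forward(self, h_prev: torch.Tensor, banks: torch.Tensor) -> torch.Tensor:
        # h_prev: (batch, hidden) previous hidden state
        # banks:  (batch, n_banks, hidden) previous memory bank states
        alpha = torch.sigmoid(self.ratio_logits).view(1, -1, 1)
        # Each bank blends its own past state with the latest hidden state;
        # banks with alpha near 1 change slowly (sluggish), others track
        # recent activity. Gradient descent tunes the blend per bank.
        return alpha * banks + (1.0 - alpha) * h_prev.unsqueeze(1)
```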
On the importance of sluggish state memory for learning long term dependency
The vanishing gradients problem inherent in Simple Recurrent Networks (SRNs) trained with back-propagation has led to a significant shift towards Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), which overcome this problem through second-order error-carousel schemes or different learning algorithms, respectively. This paper re-opens the case for SRN-based approaches by considering a variant, the Multi-recurrent Network (MRN). We show that memory units embedded within its architecture can mitigate the vanishing gradient problem by providing variable sensitivity to recent and more historic information, through layer- and self-recurrent links with varied weights, to form a so-called sluggish state-based memory. We demonstrate that an MRN, optimised with noise injection, is able to learn the long term dependency within a complex grammar induction task, significantly outperforming the SRN, NARX and ESN. Analysis of the internal representations of the networks reveals that the sluggish state-based representations of the MRN are best able to latch on to critical temporal dependencies spanning variable time delays, maintaining distinct and stable representations of all underlying grammar states. Surprisingly, the ESN was unable to fully learn the dependency problem, suggesting the major shift towards this class of models may be premature.
Tepper, J. A., Shertil, M. S., and Powell, H. M. (2016) On the importance of sluggish state memory for learning long term dependency. Knowledge-Based Systems, 96, pp. 104-114, March 2016.
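For readers unfamiliar with the mechanism, the following is a minimal sketch of a fixed-ratio sluggish state memory of the kind described above; the bank ratios and shapes are illustrative, not taken from the paper.

```python
# Minimal sketch of a fixed-ratio sluggish state memory; bank ratios and
# sizes are illustrative assumptions.
import numpy as np

def update_memory_banks(banks, h_prev, ratios=(0.2, 0.5, 0.8, 0.95)):
    """Blend each memory bank with the previous hidden state.

    banks : array of shape (n_banks, hidden) - previous bank states
    h_prev: array of shape (hidden,)         - previous hidden activation
    A high ratio gives a slowly-changing (sluggish) trace of older context;
    a low ratio tracks recent activity.
    """
    r = np.asarray(ratios)[:, None]
    return r * banks + (1.0 - r) * h_prev[None, :]
```

The concatenated bank states, fed back to the hidden layer alongside the external input, give the network simultaneous fast-changing and slowly-decaying views of its own history.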
Extracting finite structure from infinite language
This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map with laterally interconnected neurons. A derivation of functional-equivalence theory is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally related neurons together to form states. Results show that the model is able to learn the Reber grammar perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.
McQueen, T., Hopgood, A. A., Allen, T. J. and Tepper, J. A. (2005) Extracting finite structure from infinite language. Knowledge-Based Systems, 18 (4-5), pp. 135-141. ISSN 0950-7051.
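A rough sketch of the unsupervised self-organising map update at the heart of such a model; the grid size, learning rate and the lateral functional-equivalence binding step are assumptions, with only the basic competitive update shown.

```python
# Basic SOM competitive update; the model's lateral interconnections and
# functional-equivalence state binding are not reproduced here.
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """weights: (rows, cols, dim) map; x: (dim,) encoded input symbol."""
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # winning neuron
    rows, cols = np.indices(dists.shape)
    grid_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))                # neighbourhood
    return weights + lr * h[..., None] * (x - weights)     # pull towards x
```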
Detecting Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network
In recent years, the increasing propagation of hate speech on social media and the urgent need for effective counter-measures have drawn significant investment from governments, companies, and empirical research. Despite a large number of emerging scientific studies addressing the problem, existing methods are limited in several ways, such as the lack of comparative evaluations, which makes it difficult to assess the contribution of individual works. This paper introduces a new method based on a deep neural network combining convolutional and gated recurrent networks, and conducts an extensive evaluation of the method against several baselines and the state of the art on the largest collection of publicly available datasets to date. We show that our proposed method outperforms the state of the art on 6 out of 7 datasets by between 0.2 and 13.8 points in F1. We also carry out further analysis using automatic feature selection to understand the impact of the conventional manual feature engineering process that distinguishes most methods in this field. Our findings challenge the existing perception of the importance of feature engineering: the automatic feature selection algorithm drastically reduces the original feature space by over 90% and selects predominantly generic features from the datasets; nevertheless, machine learning algorithms perform better using the automatically selected features than the original features.
Zhang, Z., Robinson, D. and Tepper, J. (2018) Detecting hate speech on Twitter using a convolution-GRU based deep neural network. In The Semantic Web: Proceedings of the 15th European Semantic Web Conference (ESWC 2018), Heraklion, Crete, Greece, 3-7 June 2018. Lecture notes in computer science, 10843. Cham, Switzerland: Springer, pp. 745-760. ISBN 9783319934167.
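A hedged sketch of a convolution + GRU text classifier in the spirit of the paper; the vocabulary size, sequence length, filter and unit counts below are assumptions rather than the published configuration.

```python
# Illustrative Conv1D + GRU classifier; hyperparameters are assumptions.
from tensorflow.keras import layers, models

def build_conv_gru(vocab=30000, seq_len=100, n_classes=2):
    m = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab, 100),
        layers.Dropout(0.2),
        layers.Conv1D(100, 4, activation="relu"),  # local n-gram features
        layers.MaxPooling1D(4),
        layers.GRU(100),                           # order-sensitive pooling
        layers.Dense(n_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m
```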
Hate Speech Detection on Twitter: Feature Engineering v.s. Feature Selection
The increasing presence of hate speech on social media has drawn significant investment from governments, companies, and empirical research. Existing methods typically use a supervised text classification approach that depends on carefully engineered features. However, it is unclear whether these features contribute equally to the performance of such methods. We conduct a feature selection analysis for this task using Twitter as a case study, and report findings that challenge the conventional perception of the importance of manual feature engineering: automatic feature selection can drastically reduce the carefully engineered feature space by over 90% and selects predominantly generic features often used in many other language-related tasks; nevertheless, the resulting models perform better using the automatically selected features than the carefully crafted task-specific features.
Robinson, D., Zhang, Z. and Tepper, J. (2018) Hate speech detection on Twitter: feature engineering v.s. feature selection. In The Semantic Web: ESWC 2018 Satellite Events. ESWC: European Semantic Web Conference, 03-07 Jun 2018, Crete, Greece. Springer, pp. 46-49. ISBN 9783319981918. https://doi.org/10.1007/978-3-319-98192-5_9
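Purely for illustration, automatic feature selection of the kind discussed above can be reproduced in a few lines with scikit-learn; the selector and the synthetic data below are stand-ins, not the paper's setup.

```python
# Stand-in demonstration of aggressive automatic feature selection.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=1000,
                           n_informative=20, random_state=0)
# L1-regularised model zeroes out uninformative features.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
X_small = selector.fit_transform(X, y)
print(X.shape[1], "->", X_small.shape[1], "features")  # typically a >90% cut
```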
A corpus-based connectionist architecture for large-scale natural language parsing
We describe a deterministic shift-reduce parsing model that combines the advantages of connectionism with those of traditional symbolic models for parsing realistic sub-domains of natural language. It is a modular system that learns to annotate natural language texts with syntactic structure. The parser acquires its linguistic knowledge directly from pre-parsed sentence examples extracted from an annotated corpus. The connectionist modules enable the automatic learning of linguistic constraints and provide a distributed representation of linguistic information that exhibits tolerance to grammatical variation. The inputs and outputs of the connectionist modules represent symbolic information which can be easily manipulated and interpreted and provide the basis for organizing the parse. Performance is evaluated using labelled precision and recall. (For a test set of 4128 words, precision and recall of 75% and 69%, respectively, were achieved.) The work presented represents a significant step towards demonstrating that broad coverage parsing of natural language can be achieved with simple hybrid connectionist architectures which approximate shift-reduce parsing behaviours. Crucially, the model is adaptable to the grammatical framework of the training corpus used and so is not predisposed to a particular grammatical formalism.
Tepper, J. A., Powell, H. M., and Palmer-Brown, D. (2002) A corpus-based connectionist architecture for large-scale natural language parsing. Connection Science, 14 (2), pp. 93-114.
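The following skeleton sketches the deterministic shift-reduce control loop such a parser approximates; the connectionist module that chooses actions is replaced by a placeholder (predict_action), so all names here are hypothetical.

```python
# Skeleton of a deterministic shift-reduce loop; predict_action stands in
# for the learned connectionist decision module.
def parse(tokens, predict_action):
    stack, buffer = [], list(tokens)
    while buffer or len(stack) > 1:
        action = predict_action(stack, buffer)   # learned from a treebank
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif len(stack) >= 2:                    # e.g. "REDUCE NP"
            label = action.split()[-1]
            right, left = stack.pop(), stack.pop()
            stack.append((label, [left, right]))
        else:
            break                                # no legal move: partial parse
    return stack
```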
Connectionist natural language parsing
The key developments of two decades of connectionist parsing are reviewed. Connectionist parsers are assessed according to their ability to learn to represent syntactic structures from examples automatically, without being presented with symbolic grammar rules. This review also considers the extent to which connectionist parsers offer computational models of human sentence processing and provide plausible accounts of psycholinguistic data. In considering these issues, special attention is paid to the level of realism, the nature of the modularity, and the type of processing that is to be found in a wide range of parsers.
Palmer-Brown, D., Tepper, J. A., and Powell, H. M. (2002) Connectionist natural language parsing. Trends in Cognitive Sciences, 6 (10), pp. 437-442.
Sluggish State-Based Neural Networks Provide State-of-the-art Forecasts of Covid-19 Cases
At the time of writing, the Covid-19 pandemic is continuing to spread across the globe, with more than 135 million confirmed cases and 2.9 million deaths across nearly 200 countries. The impact on global economies has been significant. For example, the Office for National Statistics reported that the UK's unemployment level increased to 5% and headline GDP declined by 9.9%, more than twice the fall in 2009 due to the financial crisis. It is therefore paramount for governments and policymakers to understand the spread of the disease, patient mortality rates and the impact of their interventions on these two factors. A number of researchers have applied various state-of-the-art forecasting models, such as long short-term memory networks (LSTMs), to the problem of forecasting future numbers of Covid-19 cases (confirmed, deaths) with varying levels of success. In this paper, we present a model from the simple recurrent network class, the Multi-recurrent Network (MRN), for predicting the future trend of Covid-19 confirmed and death cases in the United States. The MRN is a simple yet powerful alternative to LSTMs, which utilises a unique sluggish state-based memory mechanism. To test this mechanism, we first applied the MRN to predicting monthly Covid-19 cases between Feb 2020 and July 2020, which includes the first peak of the pandemic. The MRN was then applied to predicting cases on a weekly basis from late Feb 2020 to late Dec 2020, which includes two peaks. Our results show that the MRN is able to provide superior predictions to the LSTM with significantly fewer adjustable parameters. We attribute this performance to its robust sluggish state memory and lower model complexity, and open up the case for simpler alternative models to the LSTM.
Orojo, O., Tepper, J. A., McGinnity, T. M., and Mahmud, M. (2021) Sluggish State-Based Neural Networks Provide State-of-the-art Forecasts of Covid-19 Cases. In Proceedings of the 1st International Conference on Applied Intelligence and Informatics (AII 2021), 1435, pp. 384-400. Springer International Publishing.
A Multi-recurrent Network for Crude Oil Price Prediction
Crude oil is fundamental to global growth and stability. The factors influencing crude oil prices and, more generally, the oil market are well known to be dynamic, volatile and evolving. Consequently, crude oil prediction is a complex and notoriously difficult task. In this paper, we evaluate the Multi-recurrent Network (MRN), a simple yet powerful recurrent neural network, for oil price forecasting at various forecast horizons. Although similar models, such as Long Short-Term Memory (LSTM) networks, have shown some success in this domain, the MRN is a comparatively simple neural network model which exhibits complex state-based memories that are both flexible and rigid. We evaluate the MRN against the standard Feedforward Multilayered Perceptron (FFMLP) and the Simple Recurrent Network (SRN), in addition to the current state-of-the-art LSTM, specifically for modelling the shocks in oil prices caused by the financial crisis. The in-sample data consists of key indicator variables sampled across the pre-financial-crisis period (July 1969 to September 2003), and the out-of-sample data used to evaluate the models spans the period before, during and beyond the crisis (October 2003 to March 2015). We show that such simple sluggish state-based models are superior to the FFMLP, SRN and LSTM models. Furthermore, the MRN appears to have discovered important latent features embedded within the input signal five years prior to the 2008 financial crisis. This suggests that the indicator variables could provide Central Banks and governments with early warning indicators of impending financial perturbations, which we consider an invaluable finding worthy of further exploration.
Orojo, O., Tepper, J. A., McGinnity, T. M., and Mahmud, M. (2019) A Multi-recurrent Network for Crude Oil Price Prediction. In 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 2940-2945. doi: 10.1109/SSCI44817.2019.9002841.
Does money matter in inflation forecasting?
This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
Binner, J. M., Tino, P., Tepper, J. A., Anderson, R. G., Jones, B. and Kendall, G. (2010) Does money matter in inflation forecasting? Physica A: Statistical Mechanics and its Applications, 389, pp. 4793-4808. ISSN 0378-4371.
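As a toy illustration of the finite-memory kernel approach contrasted above, the sketch below fits a kernel ridge regression (a stand-in for the paper's kernel recursive least squares) on a fixed window of lags; the series and hyperparameters are invented.

```python
# Toy one-step-ahead forecast with a fixed input memory of p lags.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))   # stand-in for an inflation series
p = 12                                     # finite memory: p lagged inputs
X = np.array([series[i:i + p] for i in range(len(series) - p)])
y = series[p:]
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X[:-1], y[:-1])
print("next-step forecast:", model.predict(X[-1:])[0], "actual:", y[-1])
```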
Predictable non-linearities in U.S. inflation
This paper compares the out-of-sample inflation forecasting performance of two non-linear models: a neural network and a Markov switching autoregressive (MS-AR) model. We find that predictable non-linearities in inflation are best accounted for by the MS-AR model.
Binner, J., Elgar, T., Nilsson, B. and Tepper, J. (2006) Predictable non-linearities in U.S. inflation. Economics Letters, 93 (3), pp. 323-328. ISSN 0165-1765.
Tools for non-linear time series forecasting in economics – an empirical comparison of regime switching vector autoregressive models and recurrent neural networks
The purpose of this study is to contrast the forecasting performance of two non-linear models, a regime-switching vector autoregressive model (RS-VAR) and a recurrent neural network (RNN), to that of a linear benchmark VAR model. Our specific forecasting experiment is U.K. inflation and we utilize monthly data from 1969 to 2003. The RS-VAR and the RNN perform approximately on par over both monthly and annual forecast horizons. Both non-linear models perform significantly better than the VAR model.
Binner, J., Elgar, T., Nilsson, B. and Tepper, J. (2004) Tools for non-linear time series forecasting in economics – an empirical comparison of regime switching vector autoregressive models and recurrent neural networks. Advances in Econometrics: Applications of Artificial Intelligence in Finance and Economics, 19, pp. 71-91.
Dr Tepper and his internationally renowned research collaborators have also presented their macroeconomic research at the following conferences:
Binner, J., Tepper, J., and Kelly, L (2017) On the robustness of sluggish state-based neural networks for providing useful insight into the New Keynesian Phillips curve. To appear in Fourth International Conference of the Society for Economic Measurement, Samberg Center at the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, USA, July 26-28, 2017
Binner, J., Tepper, J., and Kelly, L (2017) On the robustness of sluggish state-based neural networks for providing useful insight into the output gap. Bank of England Conference on “Financial Services Indices, Liquidity and Economic Activity”, Bank of England, London, 22-24 May 2017.
Binner, J., Tepper, J., and Kelly, L (2016) Multi-stage learning of US price levels using forecasts from sluggish state-based neural networks. Third International Conference of the Society for Economic Measurement, Electra Palace Hotel, Thessaloniki, Greece, 6-9th July 2016.
Binner, J., Kelly, L., Tepper, J., and Chauvet. M (2015) Forecasting Macroeconomic Time Series: A Comparison of Regime Switching and Recurrent Neural Network Methods. Second International Conference of the Society for Economic Measurement, OECD Conference Centre, 2 rue André Pascal 75016 Paris, France, 22-24th July 2015.
Belongia, M., Binner, J., Tepper J and Kelly, L (2014) Were the Central Banks Correct in Abandoning Monetary Aggregates as Targets? Comparative Evidence from the USA and Switzerland. First International Conference of The Society for Economic Measurement, University of Chicago, Chicago, USA 18-20 August 2014.
Deep Mining from Omics Data
Since the advent of high-throughput omics technologies, various molecular data such as genes, transcripts, proteins, and metabolites have been made widely available to researchers. This has afforded clinicians, bioinformaticians, statisticians, and data scientists the opportunity to apply their innovations in feature mining and predictive modeling to a rich data resource and develop a wide range of generalizable prediction models. What has become apparent over the last 10 years is that researchers have adopted deep neural networks (or "deep nets") as their paradigm of choice for complex data modeling, due to their superior performance over more traditional statistical machine learning approaches, such as support vector machines. A key stumbling block, however, is that deep nets inherently lack transparency and are considered a "black box" approach. This naturally makes it very difficult for clinicians and other stakeholders to trust their deep learning models, even though the model predictions appear to be highly accurate. In this chapter, we therefore provide a detailed summary of the deep net architectures typically used in omics research, together with a comprehensive summary of the notable "deep feature mining" techniques researchers have applied to open up this black box and provide some insight into the salient input features and why these models behave as they do. We group these techniques into the following three categories:
(a) hidden layer visualization and interpretation;
(b) input feature importance and impact evaluation;
(c) output layer gradient analysis.
While we find that omics researchers have made some considerable gains in opening up the black box through interpretation of the hidden layer weights and node activations to identify salient input features, we highlight other approaches for omics researchers, such as employing deconvolutional network-based approaches and development of bespoke attribute impact measures to enable researchers to better understand the relationships between the input data and hidden layer representations formed and thus the output behavior of their deep nets.
Alzubaidi A., Tepper J. (2022) Deep Mining from Omics Data. In: Carugo O., Eisenhaber F. (eds) Data Mining Techniques for the Life Sciences. Methods in Molecular Biology, vol 2449. Humana, New York, NY. https://doi.org/10.1007/978-1-0716-2095-3_15
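A hedged sketch of one "deep feature mining" technique spanning categories (b) and (c) above, gradient-based input saliency, using PyTorch; the model, data and scoring are placeholders, not the chapter's examples.

```python
# Placeholder model and sample; saliency = |d(output)/d(input)| per feature.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2000, 128), nn.ReLU(), nn.Linear(128, 2))
x = torch.randn(1, 2000, requires_grad=True)  # one omics sample (placeholder)
score = model(x)[0, 1]                        # logit of the class of interest
score.backward()
importance = x.grad.abs().squeeze()           # per-feature influence on output
top_features = importance.topk(10).indices    # candidate salient molecules
print(top_features.tolist())
```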
A novel deep mining model for effective knowledge discovery from omics data
Knowledge discovery from omics data has become a common goal of current approaches to personalised cancer medicine and to understanding cancer genotype and phenotype. However, high-throughput biomedical datasets are characterised by high dimensionality and relatively small sample sizes with small signal-to-noise ratios. Extracting and interpreting relevant knowledge from such complex datasets therefore remains a significant challenge for the fields of machine learning and data mining. In this paper, we exploit recent advances in deep learning to mitigate these limitations by automatically capturing enough of the meaningful abstractions latent within the available biological samples. Our deep feature learning model is based on a set of non-linear sparse auto-encoders that are deliberately constructed in an under-complete manner to detect a small proportion of molecules that can recover a large proportion of the variation underlying the data. However, since multiple projections are applied to the input signals, it is hard to interpret which phenotypes were responsible for deriving such predictions. Therefore, we also introduce a novel weight interpretation technique that helps to deconstruct the internal state of such deep learning models and reveal the key determinants underlying their latent representations. The outcomes of our experiment provide strong evidence that the proposed deep mining model is able to discover robust biomarkers that are positively and negatively associated with cancers of interest. Since our deep mining model is problem-independent and data-driven, it provides further potential for this research to extend beyond its cognate disciplines.
Alzubaidi, A., Tepper, J. and Lotfi, A., (2020) A novel deep mining model for effective knowledge discovery from omics data. Artificial Intelligence in Medicine, 104: 101821. ISSN 0933-3657
Also see:
Alzubaidi, A., Tepper, J. and Lotfi, A. (2020) Deep mining for determining cancer biomarkers. HealthManagement.org – The Journal, 20 (6), pp. 462-464. ISSN 1377-7629.
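A minimal sketch, assuming an under-complete sparse auto-encoder and a simple weight-magnitude interpretation; the paper's architecture and its novel weight interpretation technique are richer than what is shown here.

```python
# Under-complete auto-encoder with an L1 activity penalty, plus a crude
# encoder-weight ranking as a stand-in for the interpretation step.
import torch
import torch.nn as nn

n_genes, n_hidden = 5000, 64                  # under-complete bottleneck
enc = nn.Linear(n_genes, n_hidden)
dec = nn.Linear(n_hidden, n_genes)
ae = nn.Sequential(enc, nn.ReLU(), dec)

x = torch.randn(32, n_genes)                  # placeholder expression batch
loss = nn.functional.mse_loss(ae(x), x) + 1e-3 * enc(x).abs().mean()
loss.backward()                               # one training step's gradients

# Rank inputs by total encoder weight magnitude: inputs that feed strongly
# into the latent code are candidate biomarkers.
ranking = enc.weight.abs().sum(dim=0).argsort(descending=True)
print(ranking[:10].tolist())
```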
Calibrating Educational Design Quality by Integrating Theory and Practice Using Neural Auto‐encoders
This paper introduces a neurally-inspired approach to learning design that seeks to address the lack of measurement and adaptability in current learning design systems by implementing Tepper's quantitative measure of constructive alignment and associating module design patterns with their student satisfaction levels to calibrate the alignment measures. The model, which we call the Educational Design Intelligence Tool (EDIT), consists of a neural auto-encoder trained on 519 design patterns spanning 476 modules from the STEM disciplines. Neural auto-encoders learn to output the module design pattern presented on their input layer and thus form a perfect memory of the training patterns. The intention here is that, if trained only with design patterns eliciting high levels of student satisfaction, the network will form an internal memory of 'good' design patterns; during testing, when presented with a design pattern having low student satisfaction (a 'poor' design pattern), it will attempt to associate it with one or more (or a combination of) the nearest matching 'good' design patterns it has encoded. We interpret this as the model making suggestions to 'enhance' the module design and thus attract higher levels of student satisfaction. Furthermore, we apply statistical analysis and self-organising maps to visualise and evaluate the changes recommended by the neural auto-encoder, to identify the underlying 'design preferences' it has discovered from the data.
EDIT, with its data-orientated and adaptive approach to design, reveals orthodox practices whilst revealing some unexpected incongruity between alignment theory and design practice.
Bafail, A., Tepper, J., and Liggett, A. (2017) Calibrating Educational Design Quality by Integrating Theory and Practice Using Neural Auto-encoders. In Proceedings of the 24th Annual Conference of the Association for Learning Technology (ALT-C), 5-7 September 2017, University of Liverpool, UK.
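A toy illustration of the reconstruction-as-suggestion idea described above, using a small scikit-learn network as the auto-encoder; the pattern encoding, sizes and training details are assumptions.

```python
# Train an auto-encoder only on 'good' patterns; its reconstruction of a
# 'poor' pattern is read as a suggested enhancement.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
good = rng.random((519, 30))                   # placeholder design patterns
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000).fit(good, good)

poor = rng.random((1, 30))                     # a low-satisfaction design
suggestion = ae.predict(poor)                  # nearest 'good' reconstruction
print(np.round(suggestion - poor, 2))          # recommended changes
```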
A Computational Intelligence Tool to Support the Design of Outcome-Based Teaching
The use of models, frameworks, and toolkits in learning design serves to support teaching practitioners in producing well-structured learning designs for students. However, existing learning design tools inherently lack an objective metric system able to measure the degree to which an educational design is well-formed, according either to the principles of constructive alignment or, more generally, to design practices that students find satisfactory in practice. Such a metric system, integrating measures of educational theory and practice, would enable teaching practitioners to make more informed design decisions, such as which profile of activities/assessments to use for a particular set of learning outcomes. This paper presents the first computational intelligence tool that measures educational design quality in a way that is underpinned by both the theoretical principles of constructive alignment and how it is used in practice. Furthermore, the alignment metrics computed are calibrated by student satisfaction scores to promote those structures that are preferred in practice rather than from a theoretical standpoint, thus offering more pragmatic and realistic design solutions.
Bafail, A., Tepper, J., and Liggett, A. (2017) A Computational Intelligence Tool to Support the Design of Outcome-Based Teaching. To appear in Proceedings of the Canada International Conference on Education (CICE-2017), University of Toronto Mississauga, 26-29 June 2017.
Assessment for learning systems analysis and design using constructivist technique
This paper describes an innovative approach to assessment design that enables first year undergraduate students to learn Systems Analysis and Design concepts in a way that is relevant to them. The approach is inherently constructivist from two standpoints. Firstly, common group knowledge of board games is used as a means for learning subject-specific knowledge and secondly, concept mapping is used to enable groups to visualise and evolve their understanding over time. The preliminary results reveal a significant impact on student achievement and also strong student feedback in support of the approach.
Tepper, J. (2014) Assessment for learning systems analysis and design using constructivist technique. In Proceedings of the Third HEA STEM Annual Learning and Teaching Conference, University of Edinburgh, 30 April - 1 May 2014.
Module assessment: assessment, content, standards alignment and grade integrity
An exploratory exercise considered the design of exams in modules and alignment. Interviews with six module leaders in different disciplines revealed good practices around exam design. However, the discussions also elicited potential threats to grade integrity. These are categorised as:
• lack of transparency in the alignment of learning outcomes and standards;
• unassessed or sparse assessment of some learning outcomes in comparison to others;
• exam design, especially where optional questions exist, was found to pose some challenges to grade integrity, as questions were sometimes not of the same level of difficulty;
• factors extraneous to the exam design, such as the organisation of delivery in tutorials and previous coverage of content, were equally seen to challenge grade integrity.
Overall, the small scale study has revealed that professional development around assessment design and alignment of standards needs to pay much closer attention to:
• the relation between learning outcomes and definition of criteria and level descriptors
• the relation between assessment design and delivery factors (e.g. previously seen or rehearsed questions as opposed to unseen questions)
• procedures and guidance to support checks, perhaps at design stages, of alignment and due coverage of learning outcomes in the assessment.
Tomas, C., Thomas, G., and Tepper, J. (2014) Module assessment: assessment, content, standards alignment and grade integrity. SIG1 conference, Aug 29, 2014.
Connecting scholarship to teaching: The merits of a scientific interdisciplinary approach
The paper examines the merits of a development methodology for promoting Scholarship of Teaching and Learning and engendering an academic learning community within an interdisciplinary setting. A case study which depicts low levels of engagement due to the common resistors associated with unfunded projects is reviewed. A second case study with a revised methodology illustrates how an effective support and evaluation framework for a sabbatical scheme has been adopted across a number of science-based disciplines. A force field analysis reveals the value of this approach for developing academic learning communities whose activities move beyond Ashwin and Trigwell’s level one SoTL.
Liggett, A. and Tepper, J. A. (2010) Connecting scholarship to teaching: the merits of a scientific interdisciplinary approach. The London Scholarship of Teaching and Learning 8th International Conference Proceedings 2010: Disciplines, Pedagogies and Cultures for SoTL, Volume 5, University of West London. ISBN: 978-0-9569534-0-7.
Also see:
Tepper, J., McNeil, J. and Liggett, A. (2006) Connecting scholarship to teaching: a method of iterative refinement. In J. Fanghanel and D. Warren (eds.) Scholarship of Teaching and Learning (SoTL), 6th Annual International Conference, City University, London, UK, 18-19 May 2006.
Measuring constructive alignment: an alignment metric to guide good practice
We present a computational model that represents and computes the level to which an educational design is constructively aligned. The model is able to provide ‘alignment metrics’ for both holistic and individual aspects of a programme or module design.
A systemic and structural perspective of teaching and learning underpins the design of the computational model, whereby Bloom's taxonomy is used as a basis for categorising the core components of a teaching system, and some basic principles of generative linguistics are borrowed for representing alignment structures and relationships. The degree of alignment is computed using set theory and linear algebra. The model presented forms the main processing framework of a software tool currently being developed to help teachers systematically and consistently produce constructively aligned programmes of teaching and learning. It is envisaged that the model will have broad appeal, as it allows the quality of educational designs to be measured and works on the principle of 'practice techniques' and 'learning elicited' as opposed to content.
Tepper, J. A. (2006) Measuring constructive alignment: an alignment metric to guide good practice. In 1st UK Workshop on Constructive Alignment, Higher Education Academy Information and Computer Sciences (ICS) Subject Centre and Nottingham Trent University, UK.
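Purely as a hypothetical analogue of such an alignment metric, the sketch below scores the overlap between the cognitive categories a module's outcomes target and those its assessments elicit; the paper's actual set-theoretic and linear-algebraic formulation is not reproduced here.

```python
# Hypothetical analogue: Jaccard overlap between outcome and assessment
# categories as a crude alignment score in [0, 1].
def alignment_score(outcome_verbs: set, assessed_verbs: set) -> float:
    """1.0 = every targeted category is assessed and nothing extra is."""
    if not outcome_verbs and not assessed_verbs:
        return 1.0
    return (len(outcome_verbs & assessed_verbs)
            / len(outcome_verbs | assessed_verbs))

print(alignment_score({"analyse", "evaluate", "apply"},
                      {"apply", "recall", "analyse"}))  # -> 0.5
```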
Snap-Drift: Real-time, Performance-guided Learning
A novel approach for real-time learning and mapping of patterns using an external performance indicator is described. The learning makes use of the 'snap-drift' algorithm, based on the concept of fast, convergent, minimalist learning (snap) when the overall network performance has been poor, and slower, cautious learning (drift towards user request input patterns) when the performance has been good, in a non-stationary environment where new patterns are introduced over time. Snap is based on Adaptive Resonance Theory, and drift is based on Learning Vector Quantization. The two are combined in a semi-supervised system that shifts its learning style whenever it receives a change in performance feedback. The learning is capable of rapidly relearning and restabilising according to changes in feedback or patterns. We have used this algorithm in the design of a modular neural network system, known as Performance-guided Adaptive Resonance Theory (PART). Simulation results show that it discovers alternative solutions in response to a significantly changed situation, in terms of the input vectors (patterns) and/or of the environment, which may require the patterns to be treated differently over time.
Lee, S. W., Palmer-Brown, D., Tepper, J. and Roadknight, C. M. (2003) Snap-Drift: real-time performance-guided learning. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Portland, Oregon, USA.
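A hedged sketch of the snap-drift idea: fast ART-like 'snap' learning when performance feedback has been poor, and slow LVQ-like 'drift' when it has been good. The intersection-based snap and the switching rule below are simplifications for illustration, not the published algorithm.

```python
# Simplified snap-drift weight update, switched by performance feedback.
import numpy as np

def snap_drift_update(w, x, performance_good, drift_rate=0.05):
    """w, x: binary weight and input vectors of equal length."""
    if performance_good:
        return w + drift_rate * (x - w)   # drift: cautious LVQ-style move
    return np.minimum(w, x)               # snap: fast jump to w AND x

w = np.array([1.0, 1.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0, 1.0])
print(snap_drift_update(w, x, performance_good=False))  # -> [1. 0. 0. 1.]
```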
Performance-guided Neural Network for Rapidly Self-Organising Active Network Management
A neural network architecture is introduced for the real-time learning of input sequences using external performance feedback. The target problem domain suggests the use of Adaptive Resonance Theory (ART) networks that are able to function in a robust and fast real-time adaptive active network environment, where user requests and new proxylets (services) are constantly being introduced over time. The architecture learns, self-organises and self-stabilises in response to the user requests and maps the requests according to the types of proxylets available. However, the ART1 architecture and the original algorithm are modified to incorporate an external feedback mechanism whereby the performance of the system is fed into the network periodically. This modification, namely the 'snap-drift' algorithm, uses fast convergent, minimalist learning (snap) when the overall network performance has been poor and slow learning (drift towards user request input patterns) when the performance has been good. A key concern of the research is to devise a mechanism that effectively searches for alternative solutions to the ones that have already been tried, guided simultaneously by the input data (bottom-up information) and the performance feedback (top-down information). Preliminary simulations evaluate the two-tiered architecture using a simple operating environment consisting of simulated training and test data.
Lee, S. W., Palmer-Brown, D., Tepper, J., and Roadknight, C. M., (2002), Performance-guided neural network for rapidly self-organising active network management. In Proceedings of the 2nd International Conference on Hybrid Intelligent Systems (HIS2002), Chile, 2002.
Performance-guided neural network for self-organising network management
A neural network architecture is introduced for real-time learning of input sequences using external performance feedback. Some aspects of Adaptive Resonance Theory (ART) networks are applied because they are able to function in a fast real-time adaptive active network environment where user requests and new proxylets (services) are constantly being introduced over time. The architecture learns, self-organises and self-stabilises in response to user requests, mapping the requests according to the types of proxylets available. However, in order to make the neural networks respond to performance feedback, we introduce a modification to the original ART1 network in the form of the 'snap-drift' algorithm, which uses fast convergent, minimalist learning (snap) when the overall network performance is poor, and slow learning (drift towards user request input patterns) when the performance is good. Preliminary simulations evaluate the two-tiered architecture using a simple operating environment consisting of simulated training and test data.
Lee, S. W., Palmer-Brown, D., Tepper, J., and Roadknight, C. M., (2002) Performance-guided neural network for self-organising network management. In Proceedings of the London Communications Symposium, LCS 2002, 9 – 10 September 2002, London, pp 269 – 272.
Automated software quality visualisation using fuzzy logic techniques
In the past decade there has been a concerted effort by the software industry to improve the quality of its products. This has led to the inception of various techniques with which to control and measure the process involved in software development. Methods like the Capability Maturity Model have introduced processes and strategies that require measurement in the form of software metrics. With the ever-increasing number of software metrics being introduced by capability-based processes, software development organisations are finding it more difficult to understand and interpret metric scores. This is particularly problematic for senior management and project managers where analysis of the actual data is not feasible. This paper proposes a method with which to visually represent metric scores so that managers can easily see how their organisation is performing relative to quality goals set for each type of metric. Acting primarily as a proof of concept and prototype, we suggest ways in which real customer needs can be translated into a feasible technical solution. The solution itself visualises metric scores in the form of a tree structure and utilises Fuzzy Logic techniques, XGMML, Web Services and the .NET Framework. Future work is proposed to extend the system from the prototype stage and to overcome a problem with the masking of poor scores.
Senior, J., Allison, I. and Tepper, J. (2007) Automated software quality visualisation using fuzzy logic techniques. Communications of the IIMA, 7 (1), pp. 25-40. ISSN 1543-5970.
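As an illustration of the fuzzy logic component described above, the sketch below fuzzifies a normalised metric score into quality bands; the membership functions and band names are invented for this example, not taken from the paper.

```python
# Triangular fuzzy membership over a normalised metric score in [0, 1].
def triangular(x, a, b, c):
    """Membership rises from a to a peak at b, then falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_metric(score):
    """Map a normalised metric score to memberships of quality bands."""
    return {
        "poor":       triangular(score, -0.01, 0.0, 0.4),
        "acceptable": triangular(score, 0.2, 0.5, 0.8),
        "good":       triangular(score, 0.6, 1.0, 1.01),
    }

print(fuzzify_metric(0.65))  # partial membership of 'acceptable' and 'good'
```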