Limited By: Author = Pearlmutter, Barak A. (133 items found)
Displaying Results 76 - 100 of 133 on page 4 of 6
Lazy Multivariate Higher-Order Forward-Mode AD
(2007)
Pearlmutter, Barak A.; Siskind, Jeffrey Mark
Abstract:
A method is presented for computing all higher-order partial derivatives of a multivariate function R^n → R. This method works by evaluating the function under a nonstandard interpretation, lifting reals to multivariate power series. Multivariate power series, with potentially an infinite number of terms with nonzero coefficients, are represented using a lazy data structure constructed out of linear terms. A complete implementation of this method in SCHEME is presented, along with a straightforward exposition, based on Taylor expansions, of the method’s correctness.
http://mural.maynoothuniversity.ie/2049/
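The lifting idea admits a compact sketch. Below is a minimal Python analogue (the paper's implementation is lazy, multivariate, and written in Scheme; this illustration is eagerly truncated and univariate): reals are lifted to truncated Taylor series, arithmetic is overloaded, and evaluating f on the series x0 + eps yields all derivatives of f at x0 up to the truncation order.

    from math import factorial

    N = 6  # truncation order (the paper's lazy representation is unbounded)

    class Taylor:
        def __init__(self, coeffs):
            # coeffs[k] is the coefficient of eps**k
            self.c = (list(coeffs) + [0.0] * N)[:N]
        def __add__(self, o):
            o = lift(o)
            return Taylor([a + b for a, b in zip(self.c, o.c)])
        __radd__ = __add__
        def __mul__(self, o):
            o = lift(o)
            out = [0.0] * N
            for i, a in enumerate(self.c):      # truncated Cauchy product
                for j, b in enumerate(o.c):
                    if i + j < N:
                        out[i + j] += a * b
            return Taylor(out)
        __rmul__ = __mul__

    def lift(x):
        return x if isinstance(x, Taylor) else Taylor([x])

    def derivatives(f, x0):
        """All derivatives of f at x0 up to order N-1."""
        series = f(Taylor([x0, 1.0]))           # evaluate under the lifted interpretation
        return [factorial(k) * ck for k, ck in enumerate(series.c)]

    # f(x) = x^3 + 2x: derivatives at x0 = 2 are [12, 14, 12, 6, 0, 0]
    print(derivatives(lambda x: x * x * x + 2 * x, 2.0))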
Learning state space trajectories in recurrent neural networks
(1987)
Pearlmutter, Barak A.
Abstract:
Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E. Simulations in which networks are taught to move through limit cycles are shown.
http://mural.maynoothuniversity.ie/5486/
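A hedged, discrete-time simplification of the idea (the paper treats continuous-time networks; here the map y_t = tanh(W y_{t-1}) stands in): the gradient ∂E/∂W of a trajectory error E is accumulated backwards through the states, then used for gradient descent.

    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 4, 20
    W = 0.1 * rng.standard_normal((n, n))
    y0 = rng.standard_normal(n)
    target = np.stack([np.sin(0.3 * t + np.arange(n)) for t in range(1, T + 1)])

    def loss_and_grad(W):
        ys = [y0]
        for t in range(T):                      # forward pass through the trajectory
            ys.append(np.tanh(W @ ys[-1]))
        E = 0.5 * sum(np.sum((ys[t + 1] - target[t]) ** 2) for t in range(T))
        dW = np.zeros_like(W)
        dy = np.zeros(n)                        # dE/dy_t, accumulated backwards
        for t in reversed(range(T)):
            dy = dy + (ys[t + 1] - target[t])   # direct error term on y_{t+1}
            da = dy * (1.0 - ys[t + 1] ** 2)    # back through tanh
            dW += np.outer(da, ys[t])
            dy = W.T @ da                       # becomes dE/dy_t for the earlier step
        return E, dW

    for step in range(200):                     # plain gradient descent on E
        E, dW = loss_and_grad(W)
        W -= 0.05 * dW
    print("trajectory error after training:", E)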
Linear program differentiation for single-channel speech separation
(2006)
Pearlmutter, Barak A.; Olsson, Rasmus K.
Abstract:
Many apparently difficult problems can be solved by reduction to linear programming. Such problems are often subproblems within larger systems. When gradient optimisation of the entire larger system is desired, it is necessary to propagate gradients through the internally-invoked LP solver. For instance, when an intermediate quantity z is the solution to a linear program involving constraint matrix A, a vector of sensitivities dE/dz will induce sensitivities dE/dA. Here we show how these can be efficiently calculated, when they exist. This allows algorithmic differentiation to be applied to algorithms that invoke linear programming solvers as subroutines, as is common when using sparse representations in signal processing. Here we apply it to gradient optimisation of overcomplete dictionaries for maximally sparse representations of a speech corpus. The dictionaries are employed in a single-channel speech separation task, leading to 5 dB and 8 dB target-to-interference ratio improve...
http://mural.maynoothuniversity.ie/1376/
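The sensitivity propagation can be sketched as follows (a hedged illustration on a hypothetical toy LP, assuming a nondegenerate optimum; the paper applies this inside a larger learned sparse-coding system): the active basis pins z via A_B z_B = b, and implicit differentiation of that system converts dE/dz into dE/db and dE/dA.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 2.0, 1.0, 0.0],
                  [3.0, 1.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])
    c = np.array([1.0, 1.0, 2.0, 2.0])

    res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4, method="highs")
    z = res.x
    basis = np.flatnonzero(z > 1e-9)            # active (basic) variables
    AB = A[:, basis]

    dE_dz = np.ones_like(z)                     # stand-in upstream sensitivities
    dE_db = np.linalg.solve(AB.T, dE_dz[basis]) # from A_B z_B = b
    dE_dAB = -np.outer(dE_db, z[basis])         # chain rule through A_B^{-1} b
    print("dE/db =", dE_db)
    print("dE/dA (basic columns) =", dE_dAB)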
Linear Program Differentiation for Single-Channel Speech Separation
(2005)
Pearlmutter, Barak A.; Olsson, Rasmus K.
Abstract:
Many apparently difficult problems can be solved by reduction to linear programming. Such problems are often subproblems within larger systems. When gradient optimisation of the entire larger system is desired, it is necessary to propagate gradients through the internally-invoked LP solver. For instance, when an intermediate quantity z is the solution to a linear program involving constraint matrix A, a vector of sensitivities dE/dz will induce sensitivities dE/dA. Here we show how these can be efficiently calculated, when they exist. This allows algorithmic differentiation to be applied to algorithms that invoke linear programming solvers as subroutines, as is common when using sparse representations in signal processing. Here we apply it to gradient optimisation of overcomplete dictionaries for maximally sparse representations of a speech corpus. The dictionaries are employed in a single-channel speech separation task, leading to 5 dB and 8 dB target-to-interference ratio improvem...
http://mural.maynoothuniversity.ie/565/
Linear Spatial Integration for Single-Trial Detection in Encephalography
(2002)
Parra, Lucas C.; Alvino, Chris; Tang, Akaysha; Pearlmutter, Barak A.; Yeung, Nick; Osman, Allen; Sajda, Paul
Abstract:
Conventional analysis of electroencephalography (EEG) and magnetoencephalography (MEG) often relies on averaging over multiple trials to extract statistically relevant differences between two or more experimental conditions. In this article we demonstrate single-trial detection by linearly integrating information over multiple spatially distributed sensors within a predefined time window. We report an average single-trial discrimination performance of Az ≈ 0.80 and fraction correct between 0.70 and 0.80, across three distinct encephalographic data sets. We restrict our approach to linear integration, as it allows the computation of a spatial distribution of the discriminating component activity. In the present set of experiments the resulting component activity distributions are shown to correspond to the functional neuroanatomy consistent with the task (e.g., contralateral sensory-motor cortex and anterior cingulate). Our work demonstrates how a purely data-driven method for lea...
http://mural.maynoothuniversity.ie/5503/
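A minimal sketch of the approach on synthetic data (real EEG/MEG preprocessing is omitted; the window, effect size, and classifier here are illustrative choices): average each trial's sensor data over a predefined time window, learn a linear discriminator across sensors, and score it by Az, the area under the ROC curve.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    trials, sensors, samples = 400, 32, 100
    X = rng.standard_normal((trials, sensors, samples))
    y = rng.integers(0, 2, trials)
    forward = rng.standard_normal(sensors)          # a fixed spatial pattern
    X[y == 1, :, 40:60] += 0.3 * forward[:, None]   # condition effect in a window

    feats = X[:, :, 40:60].mean(axis=2)             # integrate over the time window
    Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    az = roc_auc_score(yte, clf.decision_function(Xte))
    print("single-trial Az:", round(az, 3))
    # clf.coef_ is the spatial weighting; the paper derives a component
    # activity distribution from such weights for neuroanatomical interpretation.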
Localization of Independent Components from Magnetoencephalography
(2000)
Tang, Akaysha; Phung, Dan B.; Pearlmutter, Barak A.; Christner, Robert
Abstract:
Blind source separation (BSS) decomposes a multidimensional time series into a set of sources, each with a one-dimensional time course and a fixed spatial distribution. For EEG and MEG, the former corresponds to the simultaneously separated and temporally overlapping signals for continuous non-averaged data; the latter corresponds to the set of attenuations from the sources to the sensors. These sensor projection vectors give information on the spatial locations of the sources. Here we use standard Neuromag dipole-fitting software to localize BSS-separated components of MEG data collected in several tasks in which visual, auditory, and somatosensory stimuli all play a role. We found that BSS-separated components with stimulus- or motor-locked responses can be localized to physiologically and anatomically meaningful locations within the brain.
http://mural.maynoothuniversity.ie/8122/
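A sketch of the BSS step this abstract builds on, using synthetic mixtures in place of MEG data: decompose multichannel recordings into source time courses plus fixed sensor projections; each column of the mixing matrix is the spatial pattern that dipole-fitting software would then localize.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 5000)
    sources = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]
    mixing = rng.standard_normal((12, 3))       # 12 "sensors", 3 sources
    X = sources @ mixing.T

    ica = FastICA(n_components=3, random_state=0)
    est_sources = ica.fit_transform(X)          # one time course per component
    projections = ica.mixing_                   # sensors x components: each column
    # gives the attenuations from one source to all sensors, i.e. the field
    # pattern handed to the dipole localizer.
    print(projections.shape)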
MEG source localization using an MLP with a distributed output representation
(2003)
Jun, Sung Chan; Pearlmutter, Barak A.; Nolte, Guido
Abstract:
We present a system that takes realistic magnetoencephalographic (MEG) signals and localizes a single dipole to reasonable accuracy in real time. At its heart is a multilayer perceptron (MLP) which takes the sensor measurements as inputs, uses one hidden layer, and generates as outputs the amplitudes of receptive fields holding a distributed representation of the dipole location. We trained this Soft-MLP on dipolar sources with real brain noise and converted the network's output into an explicit Cartesian coordinate representation of the dipole location using two different decoding strategies. The proposed Soft-MLPs are much more accurate than previous networks which output source locations in Cartesian coordinates. Hybrid Soft-MLP-start-LM systems, in which the Soft-MLP output initializes Levenberg-Marquardt, retained their accuracy of 0.28 cm with a decrease in computation time from 36 ms to 30 ms. We apply the Soft-MLP localizer to real MEG data separated by a blind source s...
http://mural.maynoothuniversity.ie/1386/
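A sketch of decoding such a distributed output. The network emits amplitudes of receptive fields tiling the head volume; the population-vector decoding below (an amplitude-weighted mean of the field centers) is chosen here for illustration and is not necessarily either of the paper's two strategies.

    import numpy as np

    rng = np.random.default_rng(3)
    centers = rng.uniform(-8, 8, size=(64, 3))  # receptive-field centers, cm

    def encode(loc, width=3.0):
        """Gaussian receptive-field activations for a true dipole location."""
        d2 = np.sum((centers - loc) ** 2, axis=1)
        return np.exp(-d2 / (2 * width ** 2))

    def decode(amplitudes):
        w = amplitudes / amplitudes.sum()
        return w @ centers                      # population-vector estimate

    true = np.array([2.0, -1.0, 4.0])
    # lands near `true` (population-vector estimates are biased toward the
    # centroid of the centers)
    print("decoded:", decode(encode(true)))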
Mini-symposium on automatic differentiation and its applications in the financial industry
(2017)
Geeraert, Sébastien; Lehalle, Charles-Albert; Pearlmutter, Barak A.; Pironneau, Olivier; Reghai, Adil
Abstract:
Automatic differentiation has long been used in applied mathematics as an alternative to finite differences for improving the accuracy of numerical computation of derivatives. Whenever a numerical minimization is involved, automatic differentiation can be used. Sitting between formal derivation and standard numerical schemes, the approach is based on software that mechanically applies the chain rule to obtain an exact value for the desired derivative. It has a cost in memory and CPU consumption. For participants in financial markets (banks, insurers, financial intermediaries, etc.), computing derivatives is needed to obtain the sensitivity of their exposure to well-defined potential market moves. It is a way to understand variations of their balance sheets in specific cases. Since the 2008 crisis, regulation demands that this kind of exposure be computed for many different cases, so that market participants are aware of and ready to face a wide spectrum of configurations. This paper shows ho...
http://mural.maynoothuniversity.ie/10228/
Monaural Source Separation Using Spectral Cues
(2004)
Pearlmutter, Barak A.; Zador, Anthony M.
Abstract:
The acoustic environment poses at least two important challenges. First, animals must localise sound sources using a variety of binaural and monaural cues; and second they must separate sources into distinct auditory streams (the “cocktail party problem”). Binaural cues include interaural intensity and phase disparity. The primary monaural cue is the spectral filtering introduced by the head and pinnae via the head-related transfer function (HRTF), which imposes different linear filters upon sources arising at different spatial locations. Here we address the second challenge, source separation. We propose an algorithm for exploiting the monaural HRTF to separate spatially localised acoustic sources in a noisy environment. We assume that each source has a unique position in space, and is therefore subject to preprocessing by a different linear filter. We also assume prior knowledge of weak statistical regularities present in the sources. This framework can incorporate various aspec...
http://mural.maynoothuniversity.ie/8119/
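A sketch of the "weak statistical regularity" prior this line of work leans on: maximally sparse coding of a signal in an overcomplete dictionary, posed as a linear program (minimize the l1 norm of the coefficients subject to exact reconstruction). Dictionary and signal below are synthetic stand-ins.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    d, k = 8, 20                                # signal dim, dictionary size
    D = rng.standard_normal((d, k))
    true_code = np.zeros(k)
    true_code[[3, 11]] = [1.5, -2.0]
    x = D @ true_code

    # l1 minimization: write code = u - v with u, v >= 0, minimize sum(u + v)
    c = np.ones(2 * k)
    A_eq = np.hstack([D, -D])
    res = linprog(c, A_eq=A_eq, b_eq=x,
                  bounds=[(0, None)] * (2 * k), method="highs")
    code = res.x[:k] - res.x[k:]
    print("recovered support:", np.flatnonzero(np.abs(code) > 1e-6))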
MouldingNet: Deep-Learning for 3D Object Reconstruction
(2019)
Burns, Tobias; Pearlmutter, Barak A.; McDonald, John
Abstract:
With the rise of deep neural networks a number of approaches for learning over 3D data have gained popularity. In this paper, we take advantage of one of these approaches, bilateral convolutional layers, to propose a novel end-to-end deep auto-encoder architecture to efficiently encode and reconstruct 3D point clouds. Bilateral convolutional layers project the input point cloud onto an even tessellation of a hyperplane in the (d+1)-dimensional space known as the permutohedral lattice and perform convolutions over this representation. In contrast to existing point cloud based learning approaches, this allows us to learn over the underlying geometry of the object to create a robust global descriptor. We demonstrate its accuracy by evaluating across the ShapeNet and ModelNet datasets, in order to illustrate two main scenarios, known and unknown object reconstruction. These experiments show that our network generalises well from seen classes to unseen classes.
http://mural.maynoothuniversity.ie/10988/
Multimodal Integration: fMRI, MRI, EEG, MEG
(2005)
Halchenko, Yaroslav O.; Hanson, Stephen Jose; Pearlmutter, Barak A.
Abstract:
This chapter provides a comprehensive survey of the motivations, assumptions and pitfalls associated with combining signals such as fMRI with EEG or MEG. Our initial focus in the chapter concerns mathematical approaches for solving the localization problem in EEG and MEG. Next we document the most recent and promising ways in which these signals can be combined with fMRI. Specifically, we look at correlative analysis, decomposition techniques, equivalent dipole fitting, distributed sources modeling, beamforming, and Bayesian methods. Due to difficulties in assessing ground truth of a combined signal in any realistic experiment (a difficulty further confounded by the lack of accurate biophysical models of the BOLD signal), we are cautious to be optimistic about multimodal integration. Nonetheless, as we highlight and explore the technical and methodological difficulties of fusing heterogeneous signals, it seems likely that correct fusion of multimodal data will allow previously inaccessible spatiotemporal ...
http://mural.maynoothuniversity.ie/562/
Nesting forward-mode AD in a functional framework
(2008)
Siskind, Jeffrey Mark; Pearlmutter, Barak A.
Abstract:
We discuss the augmentation of a functional-programming language with a derivative-taking operator implemented with forward-mode automatic differentiation (AD). The primary technical difficulty in doing so lies in ensuring correctness in the face of nested invocation of that operator, due to the need to distinguish perturbations introduced by distinct invocations. We exhibit a series of implementations of a referentially-transparent forward-mode-AD derivative-taking operator, each of which uses a different non-referentially-transparent mechanism to distinguish perturbations. Even though the forward-mode-AD derivative-taking operator is itself referentially transparent, we hypothesize that one cannot correctly formulate this operator as a function definition in current pure dialects of Haskell.
http://mural.maynoothuniversity.ie/1731/
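One way to realize such a perturbation-distinguishing mechanism is tagging, sketched below in Python (an editorial illustration, not the paper's code): each invocation of the derivative operator draws a fresh tag, operations dispatch on the highest tag present, and extraction selects only the tangent belonging to the matching invocation, so nested derivatives do not confuse each other's perturbations.

    import itertools

    _tags = itertools.count()

    class Dual:
        def __init__(self, primal, tangent, tag):
            self.p, self.d, self.tag = primal, tangent, tag

    def _tag(x):
        return x.tag if isinstance(x, Dual) else -1

    def _parts(x, tag):
        # view x as (primal, tangent) with respect to `tag`
        if isinstance(x, Dual) and x.tag == tag:
            return x.p, x.d
        return x, 0.0

    def add(a, b):
        t = max(_tag(a), _tag(b))
        if t < 0:
            return a + b
        (ap, ad), (bp, bd) = _parts(a, t), _parts(b, t)
        return Dual(add(ap, bp), add(ad, bd), t)

    def mul(a, b):
        t = max(_tag(a), _tag(b))
        if t < 0:
            return a * b
        (ap, ad), (bp, bd) = _parts(a, t), _parts(b, t)
        return Dual(mul(ap, bp), add(mul(ap, bd), mul(ad, bp)), t)

    def deriv(f):
        def df(x):
            t = next(_tags)                     # fresh perturbation tag per invocation
            return _parts(f(Dual(x, 1.0, t)), t)[1]
        return df

    # d/dx [ x * d/dy (x + y) ] at x = 1, y = 1 is correctly 1
    print(deriv(lambda x: mul(x, deriv(lambda y: add(x, y))(1.0)))(1.0))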
Nesting forward-mode AD in a functional framework
(2008)
Siskind, Jeffrey Mark; Pearlmutter, Barak A.
Abstract:
We discuss the augmentation of a functional-programming language with a derivative-taking operator implemented with forward-mode automatic differentiation (AD). The primary technical difficulty in doing so lies in ensuring correctness in the face of nested invocation of that operator, due to the need to distinguish perturbations introduced by distinct invocations. We exhibit a series of implementations of a referentially-transparent forward-mode-AD derivative-taking operator, each of which uses a different non-referentially-transparent mechanism to distinguish perturbations. Even though the forward-mode-AD derivative-taking operator is itself referentially transparent, we hypothesize that one cannot correctly formulate this operator as a function definition in current pure dialects of Haskell.
http://mural.maynoothuniversity.ie/10236/
Neuronal Predictions of Sparse Linear Representations
(2005)
Pearlmutter, Barak A.; Asari, Hiroki; Zador, Anthony M.
Abstract:
A striking feature of many sensory processing problems is that there appear to be many more neurons engaged in the internal representations of the signal than in its transduction. For example, humans have about 30,000 cochlear neurons, but at least a thousand times as many neurons in the auditory cortex. Such apparently redundant internal representations have sometimes been proposed as necessary to overcome neuronal noise. We instead posit that they directly subserve computations of interest. We first review how sparse overcomplete linear representations can be used for source separation, using a particularly difficult case, the HRTF cue (the differential filtering imposed on a source by its path from its origin to the cochlea) as an example. We then explore some robust and generic predictions about neuronal representations that follow from taking sparse linear representations as a model of neuronal sensory processing.
http://mural.maynoothuniversity.ie/1987/
Nonlinear time series analysis of human alpha rhythm
(2002)
Nolte, G.; Sander, T.; Lueschow, A.; Pearlmutter, Barak A.
Abstract:
Nonlinearity is often deduced by showing that a dataset significantly deviates from its phase-randomized versions, i.e. surrogate data. For real data, however, non-stationarities like artifacts and onsets and offsets of rhythmic activity will cause false positives. We propose a new test which detects dynamical nonlinearity by measuring time-asymmetry, using surrogate data merely to estimate the standard deviation of the process. The method is applied to multi-channel MEG measurements of ongoing alpha-band activity modulated by a simple visual memory task involving motor activity. The signal-to-noise ratio was enhanced using ICA, and the analysis was performed on a single separated source. We found that, if the peak at 10 Hz is accompanied by a substantial higher harmonic, time asymmetry can be detected significantly in virtually any epoch of 3-second duration. Finally, we applied our recently proposed method to estimate correlation dimension for noisy data. We found very satisfactory ...
http://mural.maynoothuniversity.ie/1420/
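A sketch of the kind of time-asymmetry statistic such a test can be built on (the statistic and normalization below are illustrative, not the paper's exact definition): for a time-reversible process the third moment of increments, E[(x(t+τ) - x(t))^3], vanishes, whereas a signal whose 10 Hz peak carries a phase-shifted higher harmonic is generally time-irreversible.

    import numpy as np

    def time_asymmetry(x, tau=1):
        d = x[tau:] - x[:-tau]
        return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

    rng = np.random.default_rng(5)
    t = np.arange(30000) * 0.01
    # reversible: sinusoid plus Gaussian noise
    linear = np.sin(2 * np.pi * t) + 0.5 * rng.standard_normal(t.size)
    # irreversible: phase-shifted higher harmonic added
    skewed = np.sin(2 * np.pi * t) + 0.4 * np.sin(4 * np.pi * t + 1.0)
    print(time_asymmetry(linear, tau=25), time_asymmetry(skewed, tau=25))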
Nonnegative Factorization of a Data Matrix as a Motivational Example for Basic Linear Algebra
(2017)
Pearlmutter, Barak A.; Šmigoc, Helena
Abstract:
We present a motivating example for matrix multiplication based on factoring a data matrix. Traditionally, matrix multiplication is motivated by applications in physics: composing rigid transformations, scaling, shearing, etc. We present an engaging modern example which naturally motivates a variety of matrix manipulations, and a variety of different ways of viewing matrix multiplication. We exhibit a low-rank non-negative decomposition (NMF) of a "data matrix" whose entries are word frequencies across a corpus of documents. We then explore the meaning of the entries in the decomposition, find natural interpretations of intermediate quantities that arise in several different ways of writing the matrix product, and show the utility of various matrix operations. This example gives the students a glimpse of the power of an advanced linear algebraic technique used in modern data science.
http://mural.maynoothuniversity.ie/10254/
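A runnable companion to that example (tiny random data in place of a real corpus): a low-rank nonnegative factorization V ≈ WH of a word-frequency "data matrix" via the classic Lee-Seung multiplicative updates; the columns of W then read as "topics" over words.

    import numpy as np

    rng = np.random.default_rng(6)
    V = rng.random((12, 8))                     # words x documents, nonnegative
    r = 3                                       # factorization rank
    W = rng.random((12, r))
    H = rng.random((r, 8))

    for _ in range(500):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # multiplicative updates keep
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # all entries nonnegative
    print("reconstruction error:", np.linalg.norm(V - W @ H))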
Oaklisp: An Object-Oriented Dialect of Scheme
(1988)
Lang, Kevin J.; Pearlmutter, Barak A.
Abstract:
This paper contains a description of Oaklisp, a dialect of Lisp incorporating lexical scoping, multiple inheritance, and first-class types. This description is followed by a revisionist history of the Oaklisp design, in which a crude map of the space of object-oriented Lisps is drawn and some advantages of first-class types are explored. Scoping issues are discussed, with a particular emphasis on instance variables and top-level namespaces. The question of which should come first, the lambda or the object, is addressed, with Oaklisp providing support for the latter approach.
http://mural.maynoothuniversity.ie/10239/
Oaklisp: an object-oriented scheme with first class types
(1986)
Lang, Kevin; Pearlmutter, Barak A.
Abstract:
The Scheme papers demonstrated that lisp could be made simpler and more expressive by elevating functions to the level of first class objects. Oaklisp shows that a message based language can derive similar benefits from having first class types.
http://mural.maynoothuniversity.ie/10240/
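A loose modern analogue of the central claim, for readers unfamiliar with first-class types (Python happens to treat classes this way, which is only an analogy to Oaklisp's design, not a rendering of it): a type can be created at runtime, passed around, stored, and used to make instances, just like a function.

    def make_counter_type(step):
        class Counter:
            def __init__(self):
                self.n = 0
            def tick(self):
                self.n += step
                return self.n
        return Counter                      # the type itself is the return value

    Fast = make_counter_type(10)            # types created and named at runtime
    c = Fast()
    print(c.tick(), c.tick(), isinstance(c, Fast))   # 10 20 True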
On the Calculation of the l2→l1 Induced Matrix Norm
(2009)
Drakakis, Konstantinos; Pearlmutter, Barak A.
Abstract:
We show that the l2 → l1 induced matrix norm, namely the norm induced by the l2 and l1 vector norms in the domain and range space, respectively, can be calculated as the maximal element of a finite set involving discrete additive combinations of the rows of the involved matrix with weights of ±1; the number of elements this set contains is exponential in the number of rows involved. A geometric interpretation of the result allows us to extend the result to some other induced norms. Finally, we generalize the findings to bounded linear operators on separable Banach spaces that can be obtained as strong limits of sequences of finite-dimensional linear operators.
http://mural.maynoothuniversity.ie/8176/
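A direct numerical check of the abstract's characterization: the l2 → l1 induced norm equals the largest l2 norm over all ±1-weighted sums of the matrix's rows (a set exponential in the number of rows), here compared against random sampling of the definition max over unit x of ||Ax||_1.

    import itertools
    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.standard_normal((4, 3))             # 4 rows -> 2^4 sign patterns

    norm = max(np.linalg.norm(A.T @ np.array(s))
               for s in itertools.product([-1.0, 1.0], repeat=A.shape[0]))

    xs = rng.standard_normal((3, 200000))
    xs /= np.linalg.norm(xs, axis=0)
    sampled = np.abs(A @ xs).sum(axis=0).max()  # lower bound on the definition
    print(norm, ">=", sampled)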
Optimal Coding Predicts Attentional Modulation of Activity in Neural Systems
(2007)
Jaramillo, Santiago; Pearlmutter, Barak A.
Abstract:
Neuronal activity in response to a fixed stimulus has been shown to change as a function of attentional state, implying that the neural code also changes with attention. We propose an information-theoretic account of such modulation: that the nervous system adapts to optimally encode sensory stimuli while taking into account the changing relevance of different features. We show using computer simulation that such modulation emerges in a coding system informed about the uneven relevance of the input features. We present a simple feedforward model that learns a covert attention mechanism, given input patterns and coding fidelity requirements. After optimization, the system gains the ability to reorganize its computational resources (and coding strategy) depending on the incoming attentional signal, without the need of multiplicative interaction or explicit gating mechanisms between units. The modulation of activity for different attentional states matches that observed in a variety of...
http://mural.maynoothuniversity.ie/1307/
Perturbation Confusion and Referential Transparency: Correct Functional Implementation of Forward-Mode AD
(2005)
Siskind, Jeffrey Mark; Pearlmutter, Barak A.
Abstract:
It is tempting to incorporate differentiation operators into functional-programming languages. Making them first-class citizens, however, is an enterprise fraught with danger. We discuss a potential problem with forward-mode AD common to many AD systems, including all attempts to integrate a forward-mode AD operator into Haskell. In particular, we show how these implementations fail to preserve referential transparency, and can compute grossly incorrect results when the differentiation operator is applied to a function that itself uses that operator. The underlying cause of this problem is perturbation confusion, a failure to distinguish between distinct perturbations introduced by distinct invocations of the differentiation operator. We then discuss how perturbation confusion can be avoided.
http://mural.maynoothuniversity.ie/566/
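The failure mode is easy to reproduce with untagged dual numbers (an editorial Python sketch of the bug, not the paper's code): because the inner derivative's perturbation is conflated with the outer one, d/dx [ x * d/dy (x + y) ] at x = 1 comes out as 2 instead of the correct 1.

    class D:                                # naive dual numbers, no tags
        def __init__(self, p, d):
            self.p, self.d = p, d
        def __add__(self, o):
            o = o if isinstance(o, D) else D(o, 0.0)
            return D(self.p + o.p, self.d + o.d)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, D) else D(o, 0.0)
            return D(self.p * o.p, self.p * o.d + self.d * o.p)
        __rmul__ = __mul__

    def naive_deriv(f, x):
        y = f(D(x, 1.0))
        return y.d if isinstance(y, D) else 0.0

    # prints 2.0 (perturbation confusion); the correct derivative is 1.0
    print(naive_deriv(lambda x: x * naive_deriv(lambda y: x + y, 1.0), 1.0))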
Playing the Matching-Shoulders Lob-Pass Game with Logarithmic Regret
(1994)
Kilian, Joe; Lang, Kevin J.; Pearlmutter, Barak A.
Abstract:
The best previous algorithm for the matching shoulders lob-pass game, Abe and Takeuchi's (1993) ARTHUR, suffered O(t^{1/2}) regret. We prove that this is the best possible performance for any algorithm that works by accurately estimating the opponent's payoff lines. Then we describe an algorithm which beats that bound and meets the information-theoretic lower bound of O(log t) regret by converging to the best lob rate without accurately estimating the payoff lines. The noise-tolerant binary search procedure that we develop is of independent interest.
http://mural.maynoothuniversity.ie/8136/
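For flavor, here is a much cruder noise-tolerant binary search than the paper's procedure (a hedged sketch using plain repetition with majority vote; the constants are arbitrary): each comparison lies with probability p < 1/2, so every midpoint is queried repeatedly before the interval is halved.

    import random

    def noisy_less(x, threshold, p=0.3):
        ans = x < threshold
        return ans if random.random() > p else not ans

    def noisy_bisect(lo, hi, threshold, tol=1e-3, votes=41):
        while hi - lo > tol:
            mid = (lo + hi) / 2
            below = sum(noisy_less(mid, threshold) for _ in range(votes))
            if below > votes // 2:              # majority says mid < threshold
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    random.seed(0)
    print(noisy_bisect(0.0, 1.0, threshold=0.618))   # close to 0.618 w.h.p.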
Progress in Blind Separation of Magnetoencephalographic Data
(2003)
Pearlmutter, Barak A.; Jaramillo, Santiago
Abstract:
The match between the physics of MEG and the assumptions of the most well developed blind source separation (BSS) algorithms (unknown instantaneous linear mixing process, many sensors compared to expected recoverable sources, large data limit) have tempted researchers to apply these algorithms to MEG data. We review some of these efforts, with particular emphasis on our own work.
http://mural.maynoothuniversity.ie/8140/
Putting the Automatic Back into AD: Part I, What’s Wrong (CVS: 1.1)
(2008)
Siskind, Jeffrey Mark; Pearlmutter, Barak A.
Abstract:
Current implementations of automatic differentiation are far from automatic. We survey the difficulties encountered when applying four existing AD systems, ADIFOR, TAPENADE, ADIC, and FADBAD++, to two simple tasks, minimax optimization and control of a simulated physical system, that involve taking derivatives of functions that themselves take derivatives of other functions. ADIC is not able to perform these tasks as it cannot transform its own generated code. Using FADBAD++, one cannot compute derivatives of different orders with unmodified code, as needed by these tasks. One must either manually duplicate code for the different derivative orders or write the code using templates to automate such code duplication. ADIFOR and TAPENADE are both able to perform these tasks only with significant intervention: modification of source code and manual editing of generated code. A companion paper presents a new AD system that handles both tasks without any manual intervention yet performs a...
http://mural.maynoothuniversity.ie/8152/
Putting the Automatic Back into AD: Part II, Dynamic, Automatic, Nestable, and Fast (CVS: 1.1)
(2008)
Pearlmutter, Barak A.; Siskind, Jeffrey Mark
Abstract:
This paper discusses a new AD system that correctly and automatically accepts nested and dynamic use of the AD operators, without any manual intervention. The system is based on a new formulation of AD as highly generalized first-class citizens in a λ-calculus, which is briefly described. Because the λ-calculus is the basis for modern programming-language implementation techniques, integration of AD into the λ-calculus allows AD to be integrated into an aggressive compiler. We exhibit a research compiler which does this integration, and uses some novel analysis techniques to accept code involving free dynamic use of nested AD operators, yet performs as well as or better than the most aggressive existing AD systems.
http://mural.maynoothuniversity.ie/8162/
Item Type
Book chapter (15)
Conference item (29)
Journal article (66)
Report (23)
Peer Review Status
Peer-reviewed (107)
Non-peer-reviewed (26)
Year
2019 (1)
2018 (3)
2017 (2)
2016 (6)
2015 (3)
2014 (4)
2013 (8)
2012 (2)
2011 (2)
2009 (3)
2008 (15)
2007 (8)
2006 (9)
2005 (9)
2004 (7)
2003 (6)
2002 (9)
2001 (2)
2000 (6)
1999 (3)
1998 (2)
1996 (4)
1995 (3)
1994 (4)
1993 (2)
1991 (2)
1990 (2)
1989 (1)
1988 (2)
1987 (1)
1986 (2)