OPUS RSS Feed: OPUS documents
https://opus.jacobs-university.de/index/index/
Sat, 10 Nov 2018 17:04:35 +0200

A Consistent Method for Learning OOMs from Asymptotically Stationary Time Series Data Containing Missing Values
https://opus.jacobs-university.de/frontdoor/index/index/docId/826
In the traditional framework of spectral learning of stochastic time series models, model parameters are estimated from trajectories of fully recorded observations. However, real-world time series data often contain missing values, and worse, the distributions of missingness events over time are often not independent of the visible process. Recently, a spectral OOM learning algorithm for time series with missing data was introduced and proved to be consistent, albeit under quite strong conditions. Here we refine the algorithm and prove that the original strong conditions can be substantially relaxed. We validate our theoretical findings by numerical experiments, showing that the algorithm can consistently handle missingness patterns whose dynamics interact with the visible process.
Tianlin Liu; working paper; Thu, 11 Oct 2018 17:04:35 +0200

Resting State EEG Classification for Motor Learning Skills Using Echo State Networks
https://opus.jacobs-university.de/frontdoor/index/index/docId/759
EEG records electrical activity from the scalp surface via electrodes. As a
modern medical imaging technique, it has been proven to be useful in many different
fields. Clinical diagnosis, psychotherapy, brain-computer interfaces and
the pharmaceutical industry all have benefited from the insights that one can
glean from EEG measurements.
However, various difficulties, such as inter-individual variability, the large
volume of data, and the influence of artifacts, prevent us from extracting useful
information from those measurements, and thus more involved analytical tools
are needed. Recurrent Neural Networks are particularly suitable for dealing
with EEG because these networks can capture the critical spatiotemporal characteristics
that EEG contains.
In this project, we successfully applied Echo State Networks to classify
people’s motor learning skills from resting state EEG recordings. We also
discovered some evidence for the existence of different neurological groups with
respect to people’s motor learning skills.
Hang Yuan; Claudia Voelcker-Rehage; Lena Huebner; Benjamin Godde; working paper; Thu, 02 Nov 2017 10:50:21 +0100

Overcoming Catastrophic Interference by Conceptors
https://opus.jacobs-university.de/frontdoor/index/index/docId/734
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided back-prop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint MNIST task CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed in the deep learning field.
Xu He; Herbert Jaeger; working paper; Thu, 12 Oct 2017 09:13:55 +0200

Unconventional Information Processing Systems, Novel Hardware: A Tour d'Horizon
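As a hedged illustration of the conceptor idea from "Overcoming Catastrophic Interference by Conceptors" above: a conceptor is computed from the correlation matrix of a task's activations, and gradients for later tasks can be shielded by projecting them away from the subspace occupied by earlier tasks. The formulas are the standard conceptor definitions from the reservoir computing literature; the toy data and variable names are our own, and this is not the paper's full CAB algorithm.

```python
import numpy as np

def conceptor(X, aperture=10.0):
    # X: (n_samples, n_units) matrix of network activations for one task.
    R = X.T @ X / X.shape[0]                 # state correlation matrix
    n = R.shape[0]
    # Standard conceptor formula: C = R (R + aperture^-2 I)^-1
    return R @ np.linalg.inv(R + (aperture ** -2) * np.eye(n))

# Illustrative shielding step on toy data: project a gradient into the
# subspace NOT used by a previously learned task, F = I - C.
rng = np.random.default_rng(0)
X_task1 = rng.standard_normal((200, 10))     # stand-in for task-1 activations
C = conceptor(X_task1)
F = np.eye(10) - C                           # "free" space for new tasks
grad = rng.standard_normal(10)
shielded_grad = F @ grad                     # update barely disturbs task 1
```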
https://opus.jacobs-university.de/frontdoor/index/index/docId/749
This report provides a wide-angle survey of computational paradigms that have a possible bearing on the development of unconventional computational substrates and hardware devices. Such unconventional substrates and devices have some properties that alienate them from the classical Turing model of computation. Among other challenging characteristics, they are non-digital, unclocked, and low-precision, exhibit static and dynamic parameter drift, and may have limited lifetime. Such properties are shared with biological computing systems (brains, but not only brains), so this survey includes ideas and insights from neuroscience and the natural computing field.
Fatemeh Hadaeghi; Xu He; Herbert Jaeger; working paper; Thu, 12 Oct 2017 09:12:46 +0200

SQL Support for Multidimensional Arrays
https://opus.jacobs-university.de/frontdoor/index/index/docId/723
Multidimensional arrays represent a core underlying structure of manifold science and engineering data. It is generally recognized today, therefore, that arrays have an essential role in Big Data and should become an integral part of the overall data type orchestration in information systems. This Technical Report discusses the support for Multidimensional Arrays (MDA) as defined in ISO 9075-15.
Dimitar Misev; Peter Baumann; working paper; Thu, 20 Jul 2017 15:59:25 +0200

An Exact Method for Vehicle Routing and Truck Driver Scheduling Problems
https://opus.jacobs-university.de/frontdoor/index/index/docId/689
In most developed countries, working hours of truck drivers are constrained by hours of service regulations. When optimizing vehicle routes, trucking companies must consider these constraints in order to ensure that drivers can comply with the regulations. This paper studies the combined vehicle routing and truck driver scheduling problem (VRTDSP), which generalizes the well-known vehicle-routing problem with time windows by considering working hour constraints. A branch-and-price algorithm for solving the VRTDSP with U.S. hours of service regulations is presented. This is the first algorithm that solves the VRTDSP to proven optimality.
Asvin Goel; Stefan Irnich; working paper; Tue, 03 Jan 2017 14:41:43 +0100

Smart Decisions by Small Adjustments: Iterating Denoising Autoencoders
https://opus.jacobs-university.de/frontdoor/index/index/docId/688
An iterative neural architecture based on repeated application of the Denoising Autoencoder is introduced. The architecture is placed in the family of other approaches involving networks of simple units and iteration at the exploitation stage. It is shown that repeated feeding of a pattern to a Denoising Autoencoder often yields non-trivial, sensible improvements of the pattern. This statement is supported by a classification experiment, in which the data transformed by our architecture is shown to be more linearly separable than the original samples.
Dzmitry Bahdanau; Herbert Jaeger; working paper; Tue, 03 Jan 2017 14:36:18 +0100

Hours of service regulations in the United States and the 2013 rule change
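To make the "smart decisions by small adjustments" idea from the abstract above concrete, here is a minimal sketch in which a simple contraction toward a stored prototype stands in for a trained Denoising Autoencoder (an assumption for illustration only; the paper trains a real DAE): repeatedly applying the denoiser moves a corrupted pattern back toward the learned one, one small adjustment at a time.

```python
import numpy as np

# Toy stand-in for a trained Denoising Autoencoder: a contraction toward a
# stored prototype pattern (hypothetical; a real DAE is a trained network).
prototype = np.array([1.0, -1.0, 1.0, 1.0, -1.0])

def dae(x, rate=0.3):
    # Each pass makes a small adjustment toward the learned manifold.
    return x + rate * (prototype - x)

def iterate_dae(x, n_steps=20):
    # Exploitation stage: iterate the denoiser on its own output.
    for _ in range(n_steps):
        x = dae(x)
    return x

noisy = prototype + 0.8 * np.random.default_rng(1).standard_normal(5)
cleaned = iterate_dae(noisy)
# cleaned lies much closer to the prototype than the noisy input does
```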
https://opus.jacobs-university.de/frontdoor/index/index/docId/687
This paper studies the revised hours of service regulations for truck drivers in the United States which will enter into force in July 2013. It provides a detailed model of the new regulation and presents a new simulation-based method to assess the impact of the rule change on operational costs and road safety. Unlike previous methodologies, the new methodology takes into account that carriers can use optimization as a tool to minimize the economic impact of stricter regulations. Simulation experiments are conducted indicating that the monetized safety benefit of reducing the daily driving time limits is of the same order of magnitude as the additional operational costs.
Asvin Goel; working paper; Tue, 03 Jan 2017 14:29:16 +0100

Fast time scale modulation of pattern generators realized by Echo State Networks
https://opus.jacobs-university.de/frontdoor/index/index/docId/686
There are many attempts in the fields of robotics and artificial intelligence towards achieving a well-functioning architecture that can control an agent in a stable and elegant manner, while allowing for mixing of behaviors. However, state-of-the-art robots are far away from the coordinated, complex and graceful behavior that we can observe in animals. A popular approach towards tackling complex motor control problems is the employment of pattern generators. Several architectures have shown good performance, modulating and switching between different patterns. However, those methods perform gradual changes rather than the fast and yet smooth modulations of a pattern for a short duration of time that are characteristic of reflexive actions. In this report I present a practical study of a technique that achieves changes in amplitude, frequency and shift of a periodic signal in a time frame of less than a period.
Jordan Ivanchev; working paper; Tue, 03 Jan 2017 14:20:10 +0100

Minimal Energy Control of an ESN Pattern Generator
https://opus.jacobs-university.de/frontdoor/index/index/docId/685
In this report we present a method of adding a feedback control mechanism to an echo state network (ESN) pattern generator in order to modulate its output patterns with the purpose of tracking slowly varying control targets, e.g. shift, amplitude, or frequency of an oscillatory pattern. A proof-of-principle case study is presented where a basic ESN is trained to produce a stable sine wave oscillation with fixed shift, amplitude and frequency. With the controller in place, the system demonstrates that the shift, amplitude and frequency of the produced sine waveform can be modulated simultaneously by suitably generated slowly varying control signals inserted into the network. Furthermore, an equilibration procedure is introduced to relearn the ESN weights such that the equilibrated ESN pattern generator can approximately reproduce the reservoir dynamics across the controllable range, with the feedback control loop switched off. As a result, when reconnecting the feedback control loop to the equilibrated ESN, the energy of the control signals is many orders of magnitude smaller compared to the native system.
Jiwen Li; Herbert Jaeger; working paper; Tue, 03 Jan 2017 14:13:28 +0100

On self-organizing reservoirs and their hierarchies
https://opus.jacobs-university.de/frontdoor/index/index/docId/684
Current advances in reservoir computing have demonstrated that fixed random recurrent networks with only readouts trained often outperform fully trained recurrent neural networks. While full supervised training of such networks is problematic, intuitively there should also be something better than a random network. In this contribution we investigate a different approach which lies in between the two. We use reservoirs derived from recursive self-organizing maps that are trained in an unsupervised way and later tested by training supervised readouts. This approach enables us to train greedy unsupervised hierarchies of such dynamic reservoirs. We demonstrate in a rigorous way the advantage of using self-organizing reservoirs over traditional random ones, and of using hierarchies of such reservoirs over single reservoirs, on a synthetic handwriting-like temporal pattern recognition dataset.
Mantas Lukosevicius; working paper; Tue, 03 Jan 2017 14:04:52 +0100

Distortion Invariant Feature Extraction with Echo State Networks
https://opus.jacobs-university.de/frontdoor/index/index/docId/683
In complex pattern recognition tasks data usually exhibits many local distortions which significantly disturb the recognition process. A method for extracting temporal features from a signal that are invariant to these distortions is presented in this report. The idea is to use an Echo State Network to generate a rich high-dimensional representation of the data. Temporal features are then extracted by finding projections of the high-dimensional representation that are minimally influenced by the selected distortions while still carrying most of the information about the underlying signal required for the task at hand. The algorithm's performance is analyzed on synthetic signals as well as on high-dimensional handwriting data for shift and scale distortions. It is shown that the algorithm is capable of extracting a low-dimensional feature set from a reservoir which is invariant to the selected distortions and relevant to the performed task.
Vytenis Sakenas; working paper; Tue, 03 Jan 2017 12:39:42 +0100

Identification of time-frequency localized operators
https://opus.jacobs-university.de/frontdoor/index/index/docId/682
We consider identification of operator families defined via a time-frequency series expansion of the operator spreading function. The identification problem is transformed into an infinite-dimensional linear algebra problem. Our aim is to establish a connection between the identifiability of the operator family and a density measure of the time-frequency index set. In this way, the identification problem can be compared to the classical density condition for existence of Gabor frames. The conclusion is that the relationship between identifiability of such operator families and the critical density is highly intricate because of the presence of additional conditions. Criteria for identifiability are developed for families of time-frequency localized operators defined via time-frequency series expansions of the spreading function based on the Gaussian function.
Niklas Grip; Götz E. Pfander; Peter Rashkov; working paper; Tue, 03 Jan 2017 12:34:30 +0100

Window design for multivariate Gabor frames on lattices
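For orientation, the spreading function representation referred to in the abstract above is the standard one from time-frequency analysis (notation ours, in one common convention; it is not quoted from the report): an operator $H$ is written as a superposition of time-frequency shifts of its input,

```latex
Hf = \iint \eta_H(t,\nu)\, T_t M_\nu f \,\mathrm{d}t \,\mathrm{d}\nu,
\qquad (T_t f)(x) = f(x - t), \quad (M_\nu f)(x) = e^{2\pi i \nu x} f(x),
```

where $\eta_H$ is the spreading function, $T_t$ is a time shift, and $M_\nu$ is a frequency shift (modulation).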
https://opus.jacobs-university.de/frontdoor/index/index/docId/681
Constructive design of Gabor frame windows is rare, and most results come from the one-dimensional case. The connection between the geometry of fundamental domains of lattices and Gabor systems was first explored in a series of papers by Han and Wang [HW01], [HW04]. We build upon these results to construct Gabor frames with smooth and compactly supported window functions in higher dimensions. For this purpose we study pairs of lattices with equal density allowing compact and star-shaped fundamental domains. Concrete examples are provided and the results are extended to other special classes of lattices. In addition, we make observations on the intricate behavior of Gabor systems with multivariate Gaussian windows.
Götz E. Pfander; Peter Rashkov; working paper; Tue, 03 Jan 2017 12:26:02 +0100

Object-Oriented or Object-Relational? An Experience Report from a High-Complexity, Long-Term Case Study
https://opus.jacobs-university.de/frontdoor/index/index/docId/680
The discussion between object-oriented and object-relational DBMS technology seems to have been settled for some time, in favor of the latter. We revisit this question based on our experience with the design and implementation of the array DBMS rasdaman, which offers storage and query language retrieval on large, multi-dimensional arrays such as 2-D remote sensing imagery and 4-D atmospheric simulation results. This information category is sufficiently far from both relational tuples and object-oriented pointer networks to allow a "fair" comparison in which neither approach has an immediate advantage.
The rasdaman system is implemented in a strictly object-oriented manner. We discuss rasdaman on the model, interface, and implementation levels and contrast our experience with concepts and concrete systems of object-relational technology. To underpin and justify rasdaman design decisions we also present rasdaman performance results. The rasdaman system is in operational use and available in open source, so our results can easily be reproduced.
Peter Baumann; working paper; Tue, 03 Jan 2017 12:20:45 +0100

A Comparative Analysis of Array Models for Databases
https://opus.jacobs-university.de/frontdoor/index/index/docId/679
While the database collection types set, list, and record have received in-depth attention, the fourth type, array, is still far from being integrated into database modeling. Due to this lack of attention there is only insufficient array support by today's database technology. This is surprising given that large, multi-dimensional arrays have manifold practical applications in earth sciences (such as remote sensing and climate modeling), life sciences (such as microarray data and human brain imagery), and many more areas. Consequently, flexible retrieval today is supported on metadata, but not on the observation and simulation data themselves.
To overcome this, large, multi-dimensional arrays as first-class database citizens have been studied by various groups worldwide. Several formalisms and languages tailored for use in array databases have been proposed and more or less completely implemented, sometimes even in operational use.
In an attempt towards a consolidation of the field we compare four important array models: AQL, AML, ARRAY ALGEBRA, and RAM. As it turns out, ARRAY ALGEBRA is capable of expressing all the other models, and additionally offers functionality not present in them. We show this by mapping all approaches to ARRAY ALGEBRA. This establishes a common representation suitable for comparison and allows us to discuss the commonalities and differences found. Finally, we show the feasibility of conceptual array models for describing optimization and architecture.
Peter Baumann; Sönke Holsten; working paper; Tue, 03 Jan 2017 12:08:30 +0100

Trace element analyses in geological materials using low resolution inductively coupled plasma mass spectrometry (ICPMS)
https://opus.jacobs-university.de/frontdoor/index/index/docId/678
The benefits of inductively coupled plasma mass spectrometry (ICPMS) for geochemical studies include excellent instrument sensitivity for determining a large number of elements (<40) with high precision, and the ability to analyze numerous samples relatively quickly (minutes per sample). This study describes the ICPMS methods employed within the Jacobs University Bremen (JUB) Geochemistry Lab for determining the concentrations of 32 trace metals in different geological materials. The quality of the analytical data is discussed in the context of repeated analyses of certified reference materials (CRMs) commonly used in geochemical research.
The average analytical precision of the concentration measurements for all rock types discussed, reported as percent relative standard deviation, is better than ±3% for eleven elements (Ti, Co, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Pb), between ±3-5% for sixteen elements (Sc, Ni, Rb, Nb, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, Th, U), and between ±5-8% for only five elements (Mo, Cs, Hf, Ta, W). The analytical accuracy of the data, determined from comparisons between measured elemental concentrations in various CRMs and published reference values, is considered excellent for 31 of 32 elements, the exception being Ti at high concentrations in the analytical solution (>10 mg/kg). However, some elements suffer from interferences due to the major element content of particular rock types, and are unlikely to be quantifiable when present at low concentrations. This includes Co in carbonate rocks, and Nb in Fe-rich rock types. The concentrations of many trace metals in some CRMs are poorly constrained, and early published datasets appear to frequently overestimate the abundances of these elements.
Brian W. Alexander; working paper; Tue, 03 Jan 2017 12:02:32 +0100

Notes on an Entropy-like Method for Robust Parameter Identification
https://opus.jacobs-university.de/frontdoor/index/index/docId/677
This report describes the basic ideas behind a novel parameter identification algorithm exhibiting high robustness with respect to outlying data. The algorithm consists in minimizing an entropy-like cost function of the identification residuals. Robustness to outliers is obtained as a consequence of the fact that the cost function used rewards unevenly distributed residuals rather than some kind of weighted mean square error (MSE). In particular, residuals are normalized to the MSE, and the minimization of the devised entropy-like function rewards the presence of a majority of low relative errors and a minority of large ones, rather than a least MSE that tends to level out residuals and hence often hides outliers. A preliminary theoretical analysis of the algorithm is reported together with experimental results to demonstrate the method. This work builds on ideas initially elaborated in 1999 and extended only in the summer of 2006 during a research visit at the IUB International University Bremen (now Jacobs University Bremen gGmbH). The author is deeply grateful to Prof. Herbert Jaeger of Jacobs University Bremen for his precious technical help and for having made this work possible.
Giovanni Indiveri; working paper; Tue, 03 Jan 2017 11:58:29 +0100

Input-Output OOMs
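A minimal sketch of the entropy-like idea described above (our own illustration, not the report's exact cost function or optimizer): squared residuals are normalized to their sum, and the Shannon entropy of the resulting distribution is minimized. Low entropy corresponds to a few large residuals (the outliers) and many small ones, which is exactly the configuration the abstract says the cost rewards.

```python
import numpy as np

def entropy_cost(residuals, eps=1e-12):
    # Normalize squared residuals to their sum, then take Shannon entropy.
    # Few large + many tiny residuals -> low entropy (preferred);
    # evenly spread residuals -> high entropy (penalized).
    p = residuals ** 2
    p = p / (p.sum() + eps)
    return -np.sum(p * np.log(p + eps))

# Hypothetical illustration: recover a line's slope by grid search on the
# entropy-like cost, with one gross outlier in the data.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.01 * rng.standard_normal(50)   # true slope 2.0
y[10] += 5.0                                   # gross outlier
slopes = np.linspace(1.0, 3.0, 401)
best = min(slopes, key=lambda a: entropy_cost(y - a * x))
# best lies near the true slope 2.0 despite the outlier
```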
https://opus.jacobs-university.de/frontdoor/index/index/docId/676
Input-output OOMs (IO-OOMs) are an extension of the basic OOM theory to deal with controlled stochastic processes. They were first presented in a comprehensive OOM tutorial text, where an early version of the OOM learning algorithm was partially transferred. Since then, OOM research has focussed on basic OOMs, in particular on developing statistically efficient learning algorithms, and research on IO-OOMs has been dormant. In this report it is shown how the main theorems for OOMs can be transferred to the case of IO-OOMs, and how one of the current OOM learning algorithms can be adapted to learning IO-OOMs, yielding the first complete IO-OOM learning algorithm.
Michael Thon; working paper; Tue, 03 Jan 2017 11:52:29 +0100

A Mathematical Semantic Web
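For readers new to OOMs, the following sketch shows the standard (uncontrolled) OOM machinery that the report above extends: sequence probabilities are obtained by applying one observable operator per symbol to a starting vector and summing the result. The example encodes a plain two-state Markov chain as an OOM; IO-OOMs would additionally index the operators by input symbols. The concrete numbers are our own illustration.

```python
import numpy as np

# A two-symbol Markov chain written as an OOM (standard construction):
# P(a1 ... an) = sigma @ tau(an) @ ... @ tau(a1) @ w0.
M = np.array([[0.9, 0.2],      # column-stochastic transition matrix
              [0.1, 0.8]])
w0 = np.array([0.5, 0.5])      # initial state distribution
sigma = np.ones(2)             # evaluation functional (sums the state vector)

def tau(a):
    # Observable operator for symbol a: select state a, then transition.
    E = np.zeros((2, 2))
    E[a, a] = 1.0
    return M @ E

def prob(seq):
    w = w0.copy()
    for a in seq:
        w = tau(a) @ w
    return sigma @ w

# e.g. P(0, 0) = P(start in state 0) * P(0 -> 0) = 0.5 * 0.9 = 0.45
```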
https://opus.jacobs-university.de/frontdoor/index/index/docId/675
The body of mathematical knowledge is rapidly increasing and constantly changing. Zentralblatt MATH, an abstracting and reviewing service in the field of mathematics, maintains a database of more than 1.6 million mathematical documents and reports an annual growth of 80,000 articles. Similarly, the open internet archive for electronic preprints of scientific papers, arXiv.org, contains close to half a million documents. These figures suggest that neither a mathematician's memory nor the time she devotes to studying new publications can possibly suffice to cover a significant fraction of the accumulated wealth of mathematical knowledge. In addition to the exponential increase in the amount of information, one also observes an increase in the complexity of mathematical content, with more interdependencies between different areas within and beyond mathematics.
We propose a Mathematical Semantic Web to support mathematicians in efficiently managing and retrieving mathematical knowledge using computer systems and the internet. As with the World Wide Web, the Mathematical Semantic Web will allow authors of mathematics to publish their documents online which cumulate to a gigantic, decentralized and dynamic mathematical knowledge base. Authors semantically annotate their work in a special logical formalism, namely Description Logics, to allow computers to understand the actual knowledge contained therein. Based on these annotations, computer agents reason about the mathematical knowledge and provide novel services on the Mathematical Semantic Web, giving mathematicians efficient access to vast repositories of mathematics.
This thesis first analyzes the utility of Description Logics for formalizing mathematical knowledge. This analysis concludes with the proposal of a Mathematical Semantic Web which is introduced in great detail subsequently. We elaborate on the architecture and individual building blocks before describing the authoring process for the Mathematical Semantic Web. To motivate the value of our proposal, some services operating on the Mathematical Semantic Web are specified. In an effort to improve knowledge retrieval even further, we introduce the combination of domain and structural semantics for reasoning processes, thereby effectively leveraging additional knowledge.
Finally, we provide a list of Best Practices which aim at simplifying the knowledge modeling process and improving the quality of the resulting knowledge base. The Best Practices have been distilled from the experience gained during our case studies.
Matthias Bröcheler; working paper; Tue, 03 Jan 2017 11:45:43 +0100

A Comparative Benchmark of Large Objects in Relational Databases
https://opus.jacobs-university.de/frontdoor/index/index/docId/674
Originally Binary Large Objects (BLOBs) in databases were conceived as a means to capture any large data (whatever large meant at the time of writing) which, for whatever reason, cannot or should not be modeled relationally. Today we find images, movies, XML, formatted documents, and many more data types stored in database BLOBs. A particular challenge obviously is moving such large units of data as fast as possible, hence performance benchmarks are of interest.
However, while extensive evaluations have been undertaken for a variety of SQL workloads, BLOBs have not been the target of thorough benchmarking up to now. TPC and SPC-2 standards do not address BLOB benchmarking either.
We present a comparative BLOB benchmark of the leading commercial and open-source systems available under Unix/Linux. Commercial DBMSs are anonymised; the open-source DBMSs benchmarked are PostgreSQL and MySQL.
Measurements show large differences between the systems under test, depending on various parameters. A surprising result is that, overall, the open-source DBMSs outperform the commercial systems in most situations if configured wisely.
Sorin Stancu-Mara; Peter Baumann; Vladislav Marinov; working paper; Tue, 03 Jan 2017 11:31:04 +0100

Support size conditions for time-frequency representations on finite Abelian groups
https://opus.jacobs-university.de/frontdoor/index/index/docId/673
The uncertainty principle for functions on finite Abelian groups provides us with lower bounds on the cardinality of the support of Fourier transforms of functions of small support. We discuss novel results in this realm and generalize these to obtain results relating the support sizes of functions and their short-time Fourier transforms. We then apply these results to construct a class of equal norm tight Gabor frames that are maximally robust to erasures. We discuss consequences of our findings for the theory of recovering and storing signals with sparse time-frequency representations.
Felix Krahmer; Götz E. Pfander; Peter Rashkov; working paper; Tue, 03 Jan 2017 11:22:27 +0100

Time Varying Narrowband Communications Channels: Analysis and Implementation
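The classical uncertainty principle alluded to in the abstract above is the Donoho–Stark support bound, stated here for orientation (a standard result, not quoted from the report): for any nonzero function $f$ on a finite Abelian group $G$,

```latex
|\operatorname{supp} f| \cdot |\operatorname{supp} \hat{f}| \;\ge\; |G|,
```

so a function of small support necessarily has a Fourier transform of large support.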
https://opus.jacobs-university.de/frontdoor/index/index/docId/672
We derive and describe a Matlab implementation of an efficient numerical algorithm for the analysis of certain classes of Hilbert-Schmidt operators that naturally occur in models of wireless radio and sonar communications channels.
A common short-time model of these channels writes the channel output as a weighted superposition of time- and frequency shifted copies of the transmitted signal, where the weight function is often called the spreading function of the channel operator.
It is often believed that a good channel model must allow for spreading functions containing Dirac delta distributions. However, we show that many narrow-band finite-lifelength channels, such as wireless radio communications channels, can be well modelled by smooth and compactly supported spreading functions.
We derive a fast algorithm for computing the matrix representation of such operators with respect to well time-frequency localized Gabor bases, such as pulse shaped OFDM bases. Hereby we use a minimum of approximations, simplifications, and assumptions on the channel. The primary intended target application is comparisons of how different parameter settings and pulse shapes affect the diagonalization properties of an OFDM system acting on a given channel.
Finally, we provide and demonstrate the use of a Matlab implementation that is fast enough for the number of carrier frequencies typically in use today.
A shortened and improved version of the main results in this report is published in the journal paper [GP07], which excludes some discussions, proofs, and the software documentation contained here.
Niklas Grip; Götz E. Pfander; working paper; Tue, 03 Jan 2017 11:04:15 +0100

Overview of Reservoir Recipes
https://opus.jacobs-university.de/frontdoor/index/index/docId/667
Echo State Networks (ESNs) and Liquid State Machines (LSMs) introduced a simple new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, which became known as reservoir computing, made RNNs accessible for practical applications as never before and outperformed classical fully trained RNNs in many tasks. The latter, however, does not imply that random reservoirs are optimal, but rather that adequate training methods for them are yet to be developed. Thus much of the current research in reservoir computing is done on reservoir adaptation, redefining the paradigm as using different methods for training the reservoir and the readout. This report motivates the new definition of the paradigm and surveys the reservoir generation/adaptation techniques, offering a natural conceptual classification which transcends the boundaries of the current "brand-names" of reservoir methods. The survey focuses more on methods relevant to practical applications of RNNs than on modeling biological brains.
Mantas Lukosevicius; Herbert Jaeger; working paper; Mon, 02 Jan 2017 17:40:42 +0100

Web Coverage Processing Service (WCPS) Implementation Specification
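As a hedged, minimal sketch of the reservoir computing paradigm surveyed in "Overview of Reservoir Recipes" above (a generic textbook-style ESN, not any specific recipe from the report): the input and recurrent weights are random and fixed, and only a linear readout is learned by ridge regression.

```python
import numpy as np

# Minimal ESN: fixed random reservoir, trained linear readout.
# Task (our own toy example): one-step-ahead prediction of a sine wave.
rng = np.random.default_rng(42)
N = 100                                            # reservoir size
W_in = 0.5 * rng.standard_normal((N, 1))           # fixed input weights
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius 0.9

def run(u):
    # Drive the reservoir with input sequence u; collect the state trajectory.
    x, states = np.zeros(N), []
    for ut in u:
        x = np.tanh(W_in[:, 0] * ut + W @ x)
        states.append(x)
    return np.array(states)

t = np.arange(300)
u = np.sin(0.2 * t)
X = run(u[:-1])[50:]                               # drop initial washout
y = u[1:][50:]                                     # one-step-ahead targets
# Ridge regression readout: only these weights are trained.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
```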
https://opus.jacobs-university.de/frontdoor/index/index/docId/666
The Open Geospatial Consortium (OGC) is the main driving force for open, interoperable service interfaces for geospatial information. To serve this mission, OGC develops and maintains a family of modular standards. Historically, the main standard for multi-dimensional raster data has been the Web Coverage Service (WCS) Implementation Specification, which defines basic access functionality on server-stored raster data (aka "coverages").
During the development of WCS, several requests have come up to add this or that processing functionality. It was decided, however, to keep WCS basic in its functionality and rather have a separate, WCS-related standard which allows users to compose their own functions without complexity restrictions, in short: to develop a standard offering a raster processing language.
The pertaining specification is the Web Coverage Processing Service (WCPS) Implementation Specification which is currently under development within OGC, by a working group initiated and led by Jacobs University Bremen. WCPS provides access to original or derived sets of geospatial coverage information, in forms that are useful for client-side rendering, input into scientific models, and other client applications. As such, WCPS includes WCS functionality and extends it with an expression language to form requests of arbitrary complexity allowing, e.g., multi-valued coverage results.
The official status of WCPS within OGC is that of a Best Practice document (document no. 06-035r1), i.e., an official recommendation by OGC. The report at hand presents an advanced WCPS specification version, which is considered complete and consistent enough to present it for public discussion and, finally, a decision on becoming an accepted OGC standard.
Peter Baumann; Georgi Chulkov; working paper; Mon, 02 Jan 2017 17:36:14 +0100