Efficient Algorithms, Automata and Data Structures

The annual workshop of the project APVV-15-0091

The workshop is organized as a part of the project **Efficient algorithms, automata and data
structures** (APVV-15-0091).
The main goal of the workshop is to present the current research results of the project team and, thanks to invited international speakers, to learn about the newest trends in research areas related to the project.

The main topics of the workshop EAADS 2018:

- Algorithms on graphs (matchings, networks, colorings)
- State and descriptional complexity
- Indoor navigation and localization

The aim of the project is research in the area of efficient algorithms for finite-state automata, graph invariants, and data analysis. More precisely, the goals are: to obtain tighter bounds on the state complexity of conversions between various types of automata and devices with a bounded stack; to determine the precise complexity of regular operations in the binary and unary cases; to give a detailed description of various complexity aspects of generalized graph invariants such as vertex cover, independent set, and maximum matching, which have applications in cryptography, medicine, the organization of education and health care, image analysis and segmentation, etc.; to better understand concept hierarchies in formal concept analysis; and to design efficient algorithms for finding bonds between fuzzy contexts.

The project is supported by The Slovak Research and Development Agency.

Slovakia

Participants

Invited Speakers

Institute of Economics

Hungarian Academy of Sciences

Budapest, Hungary

Institute of Economics

Hungarian Academy of Sciences

Budapest, Hungary

Department of Algorithms And Their Applications

Eötvös Loránd University

Budapest, Hungary

Department of Operations Research

Eötvös Loránd University

Budapest, Hungary

Institute of Computer Science

Silesian University

Opava, Czech Republic

School of Computing Science

University of Glasgow

Glasgow, UK

Department of Software and Systems Engineering

Technical University of Munich

Munich, Germany

Institute of Computer Science

University of Wrocław

Wrocław, Poland

During the workshop there will be plenty of space for active collaboration and discussions about research topics and problems related to topics of the project.

In the first part of the talk we present the history and an overview of the descriptional complexity of languages. In the second part we deal with the descriptional complexity of colonies and their relation to the state complexity of automata.

Determining the location of a user in a building without GPS availability is a basic requirement for a functional indoor navigation application. In the presentation, we will introduce a comprehensive solution for locating the user in a building using a smartphone with noisy sensors. We will describe the Advanced Point-Mass method as an implementation of Bayesian filtering and its application to the problem of indoor localization. In general, this method achieves better estimates of the user's position than grid-based and particle filters. Moreover, we will present current trends in this field and our experience from an indoor localization competition in a shopping mall.
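As a rough illustration of the Bayesian-filtering idea behind such localization (not the Advanced Point-Mass method itself), a one-dimensional grid filter with invented motion and sensor models might look like:

```python
import numpy as np

# Hypothetical 1-D corridor discretised into grid cells; the real method
# works on a 2-D grid with richer motion and measurement models.
n_cells = 50
belief = np.full(n_cells, 1.0 / n_cells)   # uniform prior over positions

def predict(belief, step_prob=0.8):
    """Motion update: the user moves one cell forward with prob. step_prob."""
    moved = np.roll(belief, 1)
    moved[0] = 0.0                          # nothing enters from outside
    new = step_prob * moved + (1.0 - step_prob) * belief
    return new / new.sum()

def update(belief, measured_cell, sigma=3.0):
    """Measurement update with a noisy (e.g. Wi-Fi based) position fix."""
    cells = np.arange(n_cells)
    likelihood = np.exp(-0.5 * ((cells - measured_cell) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = predict(belief)
belief = update(belief, measured_cell=10)
estimate = int(np.argmax(belief))           # maximum a posteriori cell
```

The alternating predict/update loop is the general Bayesian filter; point-mass methods refine how the grid representation is stored and propagated.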

I will present our framework for indoor navigation in various indoor environments, which follows an infrastructure-independent approach. We achieve this by efficiently fusing, with a particle filter, an inertial-measurement-unit-based localization method, the user context, and map information extracted from existing open sources such as OpenStreetMap (OSM), which provides data about many public buildings, but with a rough and often inaccurate representation of the indoor environment (e.g. corridors or stairs without width). Our algorithm uses an order of magnitude fewer particles than state-of-the-art algorithms. We evaluated all the components of the algorithm in real time on off-the-shelf smartphones and found that it achieves a median error of 2.3 m.
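A minimal one-dimensional particle filter fusing a step-event motion model with a map constraint; the map, parameters, and particle count are invented for illustration and are not the talk's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles = 200                        # illustrating "few particles"

# Hypothetical corridor map: positions outside [0, 30] metres are walls.
def inside_map(x):
    return (x >= 0.0) & (x <= 30.0)

particles = rng.uniform(0.0, 30.0, n_particles)   # initial position guesses
weights = np.full(n_particles, 1.0 / n_particles)

def step(particles, weights, step_len=0.7, noise=0.2):
    """Propagate on an IMU-style step event, then kill wall-crossers."""
    particles = particles + step_len + rng.normal(0.0, noise, particles.size)
    weights = np.where(inside_map(particles), weights, 0.0)
    if weights.sum() == 0.0:             # all particles died: reinitialise
        particles = rng.uniform(0.0, 30.0, particles.size)
        weights = np.full(particles.size, 1.0 / particles.size)
    weights = weights / weights.sum()
    # resampling keeps the particle count fixed
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

for _ in range(5):
    particles, weights = step(particles, weights)
estimate = float(np.average(particles, weights=weights))
```

The map constraint (zeroing weights of particles that cross walls) is what lets relatively few particles suffice: impossible trajectories are pruned at every step.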

We allocate objects to agents as exemplified primarily by school choice. Welfare judgments of the object-allocating agency are encoded as edge weights in the acceptability graph. In this way, the welfare of an allocation is the sum of its weights. We introduce the constrained welfare-maximizing solution, which is given by the allocation of highest welfare among the Pareto-efficient allocations. From a computational point of view, we identify conditions under which this solution is easily determined. For the general, unrestricted case, we formulate an integer program. We find that this is a viable option in practice, solving a real-world instance quickly. Incentives to report preferences truthfully are discussed briefly.
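A brute-force sketch of the solution concept on a toy instance (all numbers invented; for the general case the paper formulates an integer program rather than enumerating allocations):

```python
from itertools import permutations

# Toy instance: 3 agents, 3 objects. prefs[a][o] = rank (0 = best) encodes
# agent a's preferences; weight[a][o] is the agency's welfare for pair (a, o).
prefs  = [[0, 1, 2],
          [1, 0, 2],
          [0, 2, 1]]
weight = [[3, 1, 0],
          [0, 2, 1],
          [2, 0, 4]]

allocations = list(permutations(range(3)))  # allocation[a] = object of agent a

def dominates(m1, m2):
    """m1 Pareto-dominates m2: all agents weakly better off, one strictly."""
    weakly = all(prefs[a][m1[a]] <= prefs[a][m2[a]] for a in range(3))
    strictly = any(prefs[a][m1[a]] < prefs[a][m2[a]] for a in range(3))
    return weakly and strictly

pareto = [m for m in allocations
          if not any(dominates(m2, m) for m2 in allocations)]
# constrained welfare-maximizing solution: best welfare among Pareto-efficient
best = max(pareto, key=lambda m: sum(weight[a][m[a]] for a in range(3)))
welfare = sum(weight[a][best[a]] for a in range(3))
```

Enumeration is exponential in general, which is exactly why the efficiently solvable special cases and the integer-programming formulation matter.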

We study the complexity of a particular proportional division problem. On the one hand, we offer an efficient protocol; on the other hand, we prove a lower bound on the complexity as well. The robustness of our approach is illustrated by the fact that it can be extended to a more general division model in a straightforward way. We also present a method that guarantees proportional division in the case of demands with irrational ratios. Joint work with Ágnes Cseh.

The Černý conjecture says that every synchronizing deterministic finite
automaton has a reset threshold of at most *(n-1)^{2}*,
where *n* is the number of states.
It is now one of the longest-standing open problems in automata theory.
Since the famous work of Ján Černý from 1964, the topic of synchronizing
automata has attracted many researchers, and a number of questions have arisen
around the central problem.
Many interesting results have emerged, though the best general upper bound on
the reset threshold is still cubic.
In the talk, I will introduce the area and give my view on the
state of the art concerning the general question: what reset threshold
can a synchronizing automaton have? I will survey selected results from
the last few years, in particular about bounding
the reset threshold, avoiding words, and constructions of synchronizing
automata, concluding with open questions and further perspectives.
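For small automata, reset thresholds can be computed exactly by breadth-first search over subsets of states. The sketch below builds the Černý automaton C_4, the classical series attaining the *(n-1)^{2}* bound, and finds a shortest reset word; the construction is standard, though the encoding details here are my own.

```python
from collections import deque

n = 4
# Černý automaton C_n: letter 'a' cycles the states, letter 'b' moves
# state 0 to 1 and fixes every other state.
delta = {
    'a': [(i + 1) % n for i in range(n)],
    'b': [1 if i == 0 else i for i in range(n)],
}

def shortest_reset_word(delta, n):
    """BFS over subsets of states: shortest word whose image is a singleton."""
    start = frozenset(range(n))
    seen = {start: ''}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if len(s) == 1:
            return seen[s]
        for letter, f in delta.items():
            t = frozenset(f[q] for q in s)
            if t not in seen:
                seen[t] = seen[s] + letter
                queue.append(t)

w = shortest_reset_word(delta, n)   # for C_4 this has length (4-1)^2 = 9
```

The subset construction is exponential in *n*, which is one reason exact reset thresholds are known only for small or highly structured automata.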

We solve an open problem concerning the computational
complexity of the Frobenius monoid problem for a finite language.
The problem was originally posed by Shallit and Xu in 2008.
We show that determining whether, for a finite language *L* represented
by a DFA, RE, or NFA, the language *L^{*}* contains all but a finite
number of words is PSPACE-complete. This also holds in the case of a
binary alphabet. As an auxiliary question, we introduce the Cyclic
Configuration problem, asking whether for a given Turing machine there
exists any configuration from which the machine cycles. This is joint
work with Marek Szykuła.

The presentation introduces a fast algorithm for computing the derivatives of an interpolating spline curve with C^{2} smoothness over a uniform grid.
The algorithm breaks the classical de Boor computational task down into systems of equations of reduced size together with simple explicit formulas for the remaining values.
While the classical algorithm and the new one are numerically equivalent, the latter is up to three times faster.
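For context, the classical baseline the new algorithm speeds up: on a uniform grid with spacing h, the derivatives d_i of the clamped C^{2} cubic interpolating spline satisfy the tridiagonal system d_{i-1} + 4 d_i + d_{i+1} = 3 (y_{i+1} - y_{i-1}) / h. A straightforward Thomas-algorithm solver for this system (my sketch of the standard approach, not the talk's reduced-size method) is:

```python
import numpy as np

def spline_derivatives(y, h, d0, dn):
    """First derivatives d_1..d_{n-1} of the C^2 cubic interpolating spline
    on a uniform grid with clamped end derivatives d0, dn, via the
    tridiagonal system d_{i-1} + 4 d_i + d_{i+1} = 3 (y_{i+1}-y_{i-1})/h."""
    n = len(y) - 1
    d = np.zeros(n + 1)
    d[0], d[n] = d0, dn
    if n < 2:
        return d
    rhs = np.array([3.0 * (y[i + 1] - y[i - 1]) / h for i in range(1, n)])
    rhs[0] -= d0                       # move known end derivatives to the RHS
    rhs[-1] -= dn
    # Thomas algorithm: forward elimination on the [1, 4, 1] tridiagonal.
    c = np.zeros(n - 1)                # scaled superdiagonal
    b = rhs.copy()
    c[0] = 1.0 / 4.0
    b[0] = b[0] / 4.0
    for i in range(1, n - 1):
        m = 4.0 - c[i - 1]
        c[i] = 1.0 / m
        b[i] = (b[i] - b[i - 1]) / m
    for i in range(n - 3, -1, -1):     # back substitution
        b[i] -= c[i] * b[i + 1]
    d[1:n] = b
    return d
```

A useful sanity check is that the clamped cubic spline reproduces cubic polynomials exactly, so for y = x^3 the computed derivatives equal 3x^2.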

In the classical Hospitals / Residents problem (HR), we have a set of residents and a set of hospitals, where each resident ranks in strict order of preference a subset of the hospitals (and vice versa). Each hospital has a capacity, indicating the maximum number of residents that can be assigned to it. A solution is a stable matching M comprising mutually acceptable resident-hospital pairs, in which each resident has at most one hospital, no hospital exceeds its capacity, and there is no resident-hospital pair who would prefer to be assigned to one another rather than remain with their existing assignments in M (if any). In 1962 Gale and Shapley described a linear-time algorithm to find a stable matching in an instance of HR.
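The deferred-acceptance idea can be sketched as follows: this is the classical resident-oriented Gale-Shapley procedure, run on a toy instance invented for illustration.

```python
def hospitals_residents(res_pref, hosp_pref, capacity):
    """Resident-oriented Gale-Shapley for HR; returns the resident-optimal
    stable matching. res_pref[r] and hosp_pref[h] list partners best-first."""
    rank = {h: {r: i for i, r in enumerate(p)} for h, p in hosp_pref.items()}
    held = {h: [] for h in hosp_pref}   # residents provisionally held by h
    nxt = {r: 0 for r in res_pref}      # index of r's next hospital to try
    free = list(res_pref)
    while free:
        r = free.pop()
        while nxt[r] < len(res_pref[r]):
            h = res_pref[r][nxt[r]]
            nxt[r] += 1
            if r not in rank[h]:
                continue                # h does not rank r: try next hospital
            held[h].append(r)
            if len(held[h]) <= capacity[h]:
                break                   # r provisionally accepted
            worst = max(held[h], key=rank[h].get)
            held[h].remove(worst)       # over capacity: reject worst resident
            if worst == r:
                continue                # r rejected immediately: keep trying
            free.append(worst)          # displaced resident proposes again
            break
    return {h: sorted(rs) for h, rs in held.items()}

# Toy instance: hospital h1 has two places, h2 has one.
res_pref = {'r1': ['h1', 'h2'], 'r2': ['h1'], 'r3': ['h1', 'h2']}
hosp_pref = {'h1': ['r3', 'r1', 'r2'], 'h2': ['r1', 'r3']}
capacity = {'h1': 2, 'h2': 1}
matching = hospitals_residents(res_pref, hosp_pref, capacity)
```

In this instance r2 ends up unmatched, yet the matching is stable: h1 is full with residents it ranks above r2, and r2 finds no other hospital acceptable.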

In this talk we focus on variants of HR that are motivated by practical applications, each of which leads to NP-hard problems, in contrast to the complexity of the classical case. Variants that we consider involve (i) trading off size with stability, where we seek larger matchings that minimise instability, (ii) allowing ties in the preference lists, (iii) allowing couples to apply jointly to pairs of hospitals that are typically geographically close, (iv) allowing hospitals to have lower quotas as well as upper quotas, (v) considering a restricted form of stability called “social stability”, where only acquainted pairs can block a matching, and (vi) allowing some pairs to be forced and/or forbidden in a given matching. In each case we motivate and define the problem, survey existing algorithmic results and outline some open cases.

This is joint work with Georgios Askalidis, Péter Biró, Ágnes Cseh, Tamás Fleiner, Nicole Immorlica, Rob Irving, Augustine Kwanashie, Iain McBride, Eric McDermid, Shubham Mittal, Emmanouil Pountourakis and James Trimble.

The no-show paradox is a disturbing property of some apportionment methods: all parties receive the same number of votes except for one party, which receives more votes than before, and yet this party gets fewer parliamentary seats. To determine the number of seats in a parliament after an election, two main classes of methods are used: divisor methods and quota methods. It is known that divisor methods are monotone and therefore do not suffer from the no-show paradox. We formulate a condition that eliminates the no-show paradox from a quota method, and for other quota methods we quantify the occurrence of this paradox.
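The abstract does not name a particular method, so as an illustration of the divisor class here is D'Hondt, a well-known divisor method; the party names and vote counts are made up.

```python
import heapq

def dhondt(votes, seats):
    """D'Hondt divisor method: repeatedly award a seat to the party with the
    largest current quotient votes / (seats_won + 1)."""
    alloc = {p: 0 for p in votes}
    heap = [(-v, p) for p, v in votes.items()]   # max-heap via negated quotients
    heapq.heapify(heap)
    for _ in range(seats):
        _, p = heapq.heappop(heap)
        alloc[p] += 1
        heapq.heappush(heap, (-votes[p] / (alloc[p] + 1), p))
    return alloc

votes = {'A': 100_000, 'B': 80_000, 'C': 30_000}
result = dhondt(votes, 8)
```

Monotonicity is visible in the structure: increasing one party's votes only increases that party's quotients, so it can never lose a seat by gaining votes.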

We investigate the state complexity of languages resulting from
the cut operation of two regular languages represented by minimal
deterministic finite automata with *m* and *n* states.
We show that the entire range of complexities, up to the known upper bound,
can be produced in the case when the input alphabet has at least two symbols.
Moreover, we prove that in the unary case, only complexities
up to *2m-1* and between *n* and *m+n-2* can be produced, while
if *2m ≤ n-1*, then the complexities from *2m* up to *n-1* cannot be produced.
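Assuming the standard "maximal-munch" definition of the cut (a word w belongs to the cut of *L* and *K* iff the longest prefix of w lying in *L* leaves a suffix in *K*), a brute-force membership check makes the difference from plain concatenation visible; the toy languages are my own.

```python
def cut(in_L, in_K, word):
    """Membership in the cut of L and K under the maximal-munch convention:
    find the LONGEST prefix of word in L, then test the remaining suffix
    for membership in K."""
    best = None
    for i in range(len(word) + 1):
        if in_L(word[:i]):
            best = i                  # remember the longest prefix in L
    return best is not None and in_K(word[best:])

# Toy unary example: L = words of even length, K = {'a'}.
in_L = lambda w: len(w) % 2 == 0
in_K = lambda w: w == 'a'
```

Here 'aaa' is in the cut (longest even prefix 'aa', suffix 'a'), but 'aa' is not: its longest prefix in L is 'aa' itself, leaving the empty suffix, even though the concatenation LK does contain 'aa'.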

Membrane systems, or P systems, were introduced by Gheorghe Păun in 1998 with the aim of modelling the architecture and functioning of living cells and tissues in terms of distributed computing systems. Since then, membrane computing has become an active and successful area of natural computing.

A standard membrane system consists of a membrane structure (a virtual graph, usually a hierarchical arrangement of membranes) and finite multisets of objects which can be found in the compartments located at the nodes of the graph. The objects represent biochemical ingredients, while the membrane structure describes the inner structure of the cells or tissues. Each compartment is associated with a finite set of rules that prescribe how the objects and the membrane structure can change (evolve) and how objects can be communicated to the neighbouring compartments and to the environment of the P system. The first model was given by a tree-like structure and symbol-objects. Since then, many other variants of P systems have been introduced and studied.

Over the years, an essential part of the research in membrane computing has been devoted to the computational power of different variants of P systems, with special emphasis on their descriptional complexity aspects. It has been shown that several types of membrane systems with a very small number of compartments, objects, rules, or other constituents are very powerful computing devices, even as powerful as Turing machines.
In this talk, we discuss size and state complexity aspects of some important variants of P systems (catalytic P systems, generalized communicating P systems, and P colonies) and demonstrate that these types of membrane systems even with very low complexity are computationally complete.

Our input is a complete graph G=(V,E) on n vertices where each vertex has a strict ranking of all other vertices in G. Our goal is to construct a matching in G that is popular. A matching M is popular if M does not lose a head-to-head election against any matching M', where each vertex casts a vote for the matching in {M,M'} where it gets assigned a better partner. The popular matching problem is to decide whether a popular matching exists or not. The popular matching problem in G is easy to solve for odd n. Surprisingly, the problem becomes NP-hard for even n. Joint work with T. Kavitha.
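The head-to-head election can be sketched directly from the definition; the preference profile below is a toy example on three vertices (not from the talk), chosen so that its matchings beat one another cyclically.

```python
def votes(pref, M1, M2):
    """Head-to-head election between matchings M1, M2 (dicts vertex -> partner;
    unmatched vertices are absent): each vertex votes for the matching that
    assigns it a better partner, being unmatched is worst. Returns the margin
    of M1 over M2; M1 is not popular if some M2 makes this negative."""
    plus = minus = 0
    for v, ranking in pref.items():
        def r(M):
            p = M.get(v)
            return ranking.index(p) if p is not None else len(ranking)
        if r(M1) < r(M2):
            plus += 1
        elif r(M2) < r(M1):
            minus += 1
    return plus - minus

# Cyclic preferences on a triangle: each matching is beaten by another one.
pref = {'a': ['b', 'c'], 'b': ['c', 'a'], 'c': ['a', 'b']}
M1 = {'a': 'b', 'b': 'a'}
M2 = {'b': 'c', 'c': 'b'}
net = votes(pref, M1, M2)   # M2 beats M1, so M1 is not popular
```

In this instance no matching wins or ties against all others, so no popular matching exists, which illustrates why existence is a genuine decision problem.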

We study the classical, two-sided stable marriage problem under pairwise preferences. In the most general setting, agents are allowed to express their preferences as comparisons of any two of their edges, and they also have the right to declare a draw or even withdraw from such a comparison. This freedom is then gradually restricted as we specify six stages of orderedness in the preferences, ending with the classical case of strictly ordered lists. We study all cases occurring when combining the three known notions of stability (weak, strong, and super-stability) under the assumption that each side of the bipartite market obtains one of the six degrees of orderedness. By designing three polynomial algorithms and two NP-completeness proofs, we determine the complexity of all cases not yet known, and thus give an exact boundary, in terms of preference structure, between tractable and intractable cases.

Research discussions in workgroups together with invited speakers.

Research discussions in workgroups together with invited speakers.

Institute of Computer Science

Faculty of Science

P. J. Šafárik University in Košice

Jesenná 5

040 01 Košice

Slovakia