


Location Leveling

1 Introduction[1]

Location Based Services (LBS) are an emerging application of mobile data services [K06, SDK01a, XZL06]. Using LBS, users with location-aware Mobile Units (MUs) can query about their surroundings anywhere and anytime through Wireless Service Providers (WSPs) and Information Content Providers (ICPs). However, unlike traditional information services built on wired networks, successfully deploying such services requires more than adding delivery functionality and scalability on top of the supporting infrastructure. While this ubiquitous computing paradigm provides great convenience for information access, its constraints pose a variety of challenges for query processing in location-based services [XZL06]. Moreover, as a growing number of traditional systems are adapted to become wireless, systems become more and more complex. Middleware solutions have proven to be of great benefit in combining delivery features with ubiquitous system platforms, system extensibility, targeted wireless devices, back-end legacy systems, data storage, and workflow needs. In this paper we explore a middleware solution to query processing in location-based services.

Processing a Location Dependent Query (LDQ) [SDK01a, SDK01b, SD02] is the key to LBS. Individuals using MUs, including laptops and PDAs, may access data located at ICPs using database queries. The responses to these mobile queries depend on the location of the user. LDQ processing is complicated by possibly heterogeneous location representations and mismatched location granularities between WSPs and ICPs. This so-called Location Granularity Mismatch may negatively affect location dependent queries, and consequently LBSs. Consider the LDQ "Find the closest restaurant" in an LBS. At least two data accesses are needed to complete this query. In the first access, the location of the MU is obtained from the LoCation Service (LCS) at the WSP. The location value is then bound to the original query and a Location Aware Query (LAQ) "Find the restaurants in the area centered at the bound location" is formed. The LAQ is then sent through wireless delivery channels to ICPs. The ICP returns a response of "A list of restaurants in the area centered at the bound location". A more detailed discussion of LDQ processing can be found in the literature [SDK01a, SDK01b, SD02]. Two issues are raised here beyond simple data delivery. First, the location representations of the WSP and ICP may be heterogeneous and mismatched; examples are the CellID provided by WSPs and the street address used by ICPs. Direct translation to a common representation such as a longitude/latitude pair has been proposed as a solution in previous work. However, this method is not sufficient when legacy systems are incorporated. When direct translation is applied, the common representation must be indexed for efficient search, which is clearly not an optimal solution. In addition, the inherent restriction of a longitude/latitude pair does not allow topological information, such as a river in between, being inside a mountain, or the third floor of a mall. Second, the granularities of the location representations may not be identical; a CellID and an address may not be equivalent in location granularity. If the term "closest" is interpreted to be within five miles, the query could be interpreted as "within 5 miles" of a specific latitude/longitude, cell, or zip code, and the responses to the query would differ.

Thus, a reconciliation solution is needed for this location mismatch in location dependent query processing. This paper presents a data model, Location Leveling (LL), that solves the problem in a general pervasive mobile computing environment as a middleware conciliation solution.

We use an architectural model originally presented in [SDK01a] and shown in Figure 1. A middleware component called the Location Dependent Services Manager (LDSM) is responsible for conciliating mismatches in the query, determining where to process the query, and compiling results from multiple ICPs prior to returning them to the user at the MU. MUs issue mobile queries through the wireless network provided by WSPs. The queries are processed at fixed hosts, the ICPs. A single query may be processed at multiple hosts and, if requested several times, could be executed at different sites. Here an ICP refers to any type of information service provider, including legacy systems. We make no restrictions or assumptions on the capabilities of content providers. We also support cases in which one query is processed by multiple content providers. For example, the query "Find the closest restaurants and hotels" may be processed by two different content providers, each using a different location granularity. We do not assume that an ICP is capable of providing data at any specific granularity. Similarly, not every LCS at the WSP side can provide all granularities that the content providers support in their databases.

This paper is organized as follows. We examine the location mismatch problem in detail in Section 2. Related work is surveyed in Section 3. The location leveling data model is presented in Section 4. The basic LL algorithms and comparisons to existing approaches are given in Section 5. Finally the work is summarized in Section 6.

[Figure 1: Architecture for location dependent query processing]

2 Problem Definition

In this section we give formal definitions of the related concepts and the problem. We call the location value assigned by a LoCation Service (LCS) the Binding Location (BL), and the location value used in the content database the Data Location (DL). The BL and DL can take different representations and granularities. Location Leveling is responsible for the location "transformation" from the BL to the DL for each query. The results of the query and their precision are affected by the location granularity of the stored data and the transformation technique used. As an example, consider the query "Find all hotels within five miles of 'John'". If the operator "within five miles" were interpreted as a circular area with a radius of five miles centered at 'John', a different set of hotels would be returned than if it were interpreted as a rectangular area. Moreover, for the same query, if the BL were a latitude/longitude, a smaller set of hotels would be returned than if it were a cell or zip code. The actual results depend on the granularities of both sides. These issues are caused solely by location dependent queries in a ubiquitous environment.

[Figure 2: The six-step location transformation]

To conciliate the location mismatch in LDQ processing, we propose a six-step location transformation, as illustrated in Figure 2. We will see that these steps are sufficiently general and flexible for a ubiquitous environment. We briefly introduce the steps as follows; a small sketch of the pipeline follows the list:

• Step 1. Issue a query: A mobile or stationary user issues a query when he is actually at Point A. We denote this location at which the query is issued via the MU client application the Query Location (QL).

• Step 2. Determine location dependency: Location dependency is determined by the type of the operators used, such as “closest” and “straight ahead”. If the query is location dependent, then location binding is required. Otherwise location binding is not needed and the query is transferred directly to the service provider.

• Step 3. Location Binding: The LCS is queried about the location of the MU at this step. Location binding is the process of replacing the abstract QL location value with a more precise physical location called the Binding Location (BL). Any location representation may be used; typical examples are the latitude/longitude granularity with a value PointB, or the CellID granularity with a value area1. The BL may be different from the QL.

• Step 4. Spatial Operator Interpretation: The location granularity depends on the interpretation of the spatial operator used in the query (such as "nearest", "within a 5 mile radius", etc.). This step converts the BL into a location area within which the query results are to be found. We call this location the Interpreted Location (IL). The query is no longer dependent on the location of the MU and becomes a location aware query [SDK01b].

• Step 5. Location Leveling: Any location mismatch is resolved at this step. When the Data Location (DL) granularity differs from the granularity of the IL (area2 in Figure 2), Location Leveling is performed.

• Step 6. Query Submission: At this step, the modified query after LL is ready to be submitted to the related content data providers.
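
To make the flow concrete, the following Python sketch strings the six steps together. The helper names (lcs_lookup, interpret_operator, level_location, submit) are hypothetical placeholders for the LCS, operator interpretation, Location Leveling, and ICP submission components; they are not an API defined in this paper.

def is_location_dependent(operator):
    # Step 2: location dependency is decided by the spatial operator used.
    return operator in {"closest", "straight ahead", "within"}

def process_ldq(query_text, operator, lcs_lookup, interpret_operator,
                level_location, submit):
    if not is_location_dependent(operator):            # Step 2
        return submit(query_text, None)                # no binding needed
    bl = lcs_lookup()                                  # Step 3: QL -> BL
    il = interpret_operator(operator, bl)              # Step 4: BL -> IL (an area)
    dl_values = level_location(il)                     # Step 5: IL -> DL granularity
    return submit(query_text, dl_values)               # Step 6: send to the ICPs

(Step 1, issuing the query, corresponds to invoking process_ldq from the MU client application.)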

In Figure 2, location granularities may appear at four different levels: the Query Location, the Binding Location, the spatial operator Interpreted Location, and the Data Location. The location binding step may sometimes be performed in cooperation with services provided by the location based services. As an example, a query can be: "Find the hotels near where I will be in one hour". Here, the BL is a projection of a future location based on the movement direction and speed of the user. The location service would have to find the current location and predict the future location using sophisticated prediction techniques. This additional prediction process may not be performed by the location service but rather by the LDSM middleware. Although this extra work should not impact the actual location granularity, it could change the precise location with which the MU is associated.

Many different types of spatial operators may be used in these queries: distance based (within five miles), overlap (within my movement path), straight ahead, etc. When the spatial operator is examined, it may be required to change the BL. For simplicity, in this paper we usually assume the IL = BL.

We conclude this section with an example that highlights the various locations and issues presented.

Example 1. This example illustrates the process of the six-step location transformation using two different BLs and two different operators. The two BLs are a latitude/longitude pair and a zipcode, and the two spatial operators are closest and straight ahead. We assume the "closest" operator is interpreted as a circular area with a radius of 5 miles from PointB, and the "straight ahead" operator is interpreted as a rectangular area with length 5 miles (the longest edge of the rectangle is 5 miles) in the direction of movement. Table 1 summarizes the different location granularities used in each stage of the query translation process.

Suppose we have a workspace W that is partitioned into 5 zipcodes, {Z1, Z2, Z3, Z4, Z5}, and two states, {S1, S2}. For a query "Find the closest/straight ahead Hotels within 5 miles", the four combinations of query interpretation using the two location bindings and the two operator interpretations are shown in Figures 3 a), b), c), and d). The closest operator is interpreted as a circular region around the BL: for the latitude/longitude binding this is a circle, and for the zipcode binding this is a region 5 miles around the zipcode Z5. The straight ahead operator results in a rectangular region in the direction of movement for the latitude/longitude binding, but for the zipcode binding this is the area in front of the movement direction 5 miles around the zipcode Z5. As the content provider provides information at the state level, the intersection of the IL with states gives an answer of hotels in state S1 for the latitude/longitude binding, and those in S1 or S2 for the zipcode binding.

[Figure 3 (a)-(d): the four query interpretations; Table 1: location granularities at each stage of the query translation]

3 Related Work

In this section we present related work in the areas of location dependent queries, location translation, and location modeling as found in previous literature.

3.1 Location Dependent Queries

We have examined the overall process and proposed an architecture to support the processing of location dependent queries [SDK01b, SDK01a]. In that work we distinguished seven different types of location queries. Location Aware Queries (LAQs) have results that depend on location but are independent of the user's current location. An example of a LAQ would be: "Find all Italian restaurants in Dallas". Compare this to an LDQ: "Find the closest Italian restaurants". A LAQ does not require that the MU be bound to a location. Therefore, LAQs do not require any location translation or leveling process at all. An LDQ differs from the others in that a Location Related (LR) operator (such as near or closest) may be used. In addition, these queries do not explicitly contain a location attribute; an implied location, the one to which the query is bound, is included.

Another related type of mobile query is the Moving Object Database Query (MODQ) [SWCD97, WXCJ98, Wol02]. These queries assume that not only is the query originator moving, but so is the object about which the query is being asked. An example of a MODQ is: "Find the closest automobiles manufactured by GMC". Here the automobiles are moving, as is the MU from which the query is being requested. In fact, the MODQ can be viewed as a generalized LDQ. Three special types of MODQs have been identified in [SWCD97]: instantaneous, continuous, and persistent. These may or may not actually be LDQs. However, the location leveling process would be needed by all of these types of queries. A major difference between LDQs and MODQs is that a location binding may have to be performed both on the MU from which the query is asked and on the object being queried. In the above example, the target automobiles will have to be bound to a location, as will the MU. In this example, no LL is needed assuming that the same binding process and location granularity are used.

MOD queries are part of the ongoing research in Moving Object Databases (MOD) [Wol02]. Not only is examining queries in this domain important, but so are many other aspects. Research areas include location modeling, trajectory prediction and usage, and new operators for queries [Wol02, TWHC04, XW03]. Trajectory management is crucial for processing not only MOD queries but also LDQs: an accurate prediction of the future location of an MU is needed to bind the query to a future location. Although there is much work in the area of MODs, we are not aware of any work on the translation of location granularities. Thus not only is much of the MOD research applicable to LDQs, but much of our LL work is applicable to processing MODQs.

3.2 Location Translation

Geocoding is an automated process that is used in Geographic Information Systems (GIS) to translate the implicit references, such as addresses, to explicit geographic references, such as latitude/longitude [EG03]. “Geocoding is the process of assigning geographic coordinates (latitude/longitude) to data” [Map00]. Typical geocoding algorithms parse a street address into several components and then use these components to find matches or near matches in a database [Map00]. We view Geocoding as a special case of location leveling.

The term location translation has been previously used to describe this leveling process [Sig00]. However, translation provides very limited capabilities. The possible set of locations at both the query and content provider sides is limited and known a priori. The translations are well defined and either hard coded or described by a precise mapping function. We have seen this approach used by SignalSoft Corp in its "" translations, which include cell to latitude/longitude, latitude/longitude to zones, zones to URL, and latitude/longitude to common description (address) [Sig00]. With Location Leveling, the different granularities are not known a priori. Thus, the Location Leveling process must not only determine the BL and DL, but also figure out how to translate the query from the BL to the DL. In addition, one LDQ could be decomposed into multiple queries to be executed at multiple content provider sites, so multiple DLs may be used. As our approach is general and applicable to any granularities, translation can be viewed as a specialized subset of the capabilities provided by location leveling algorithms.

3.3 Modeling

Leonhardt [Leo98] proposed a location model for a Location Service. The geometric ("coordinate system") representation of locations and the symbolic ("names and their relationships") representation are integrated into his model as semi-symbolic. Higher levels of abstraction are viewed as symbolic (names and relationships) and the lower levels are viewed in a geometric representation. We envision a similar location hierarchy independent of any location service. As described in [Leo98], location concepts are partially ordered by the "contains" relationship, reflecting the spatial inclusion of the associated geographical areas, and how they may overlap. We thus use a Location Directed Acyclic Graph (DAG) to represent the location concepts and their relationships.

Jose [Jos01] has indicated that location concept relationships can be either "Containment" or "Adjacency". Containment is a unidirectional relation between location concepts, whereas adjacency is immediate physical proximity. Jose [Jos01] assumes no partial containment in location hierarchies when integrating different location sources in a service-based framework. We define both total and partial containment relationships in our work.

Location hierarchies are also considered concept hierarchies. Concept hierarchies are data or application specific since they define mapping rules between different levels of concepts. In the Concept Hierarchies categorization presented by Lu [Lu97], Schema type is defined as a partial order to reflect relationships among attributes. Therefore, one can also see the Location DAG as a “Schematic Concept Hierarchy”.

A query processing model that uses context hierarchies and summary databases is presented in [MMR98]. The model is designed for a mobile computing environment to cope with disconnection, low bandwidth, and limited storage space. The paper also mentions a query conversion process that performs summarization of the data and resembles our query translation steps in the LDSM. However, the work only considers generalization, i.e., moving to a higher level concept in the location hierarchy graph, and thus the model can only represent coarser granularities.

Modeling the movement of objects using multiple granularities has been examined in [HE02]. Both spatial and temporal granularities are considered. The authors specifically look at an individual's geospatial lifeline, which can be viewed as a history of locations and corresponding timestamps where this individual has been. Different granularities can result from different frequencies of sampling an individual's location and the speed at which the location changes. More frequent sampling results in more detail and finer granules. The granularity of an individual's lifeline can be refined or coarsened. The authors formally examine these issues; however, no algorithms to perform the conversions are provided.

Location hierarchies have also been used as the aggregation levels in multidimensional data modeling. Aggregation in multidimensional data modeling can be considered the conversion from a fine level concept to a coarser level concept [AGS97]. Accessing the finer concepts from the coarser concept brings specialization to the conversion process; the drill-down operation displays detailed information for the lower aggregation levels, that is, it performs the specialization [AGS97]. Aggregation semantics, granularity usage, and imprecision in multidimensional data were defined in Pedersen's studies [PJD99, Ped00, PJD01]. The partial containment relationship in multidimensional data has also drawn attention in the database community. A spatiotemporal data model has been studied in [KRSO01], where the semantics related to granularities are used in a conceptual model. A survey of multidimensional data models is presented in [ASS01].

Wang et al. [WJS93] studied the time granularity mismatch for federated temporal databases. The authors described temporal granularities as residing in a complete lattice with respect to the finer-than relation and defined temporal modules for the information conversions. They defined the Least Upper Bound (LUB) and Greatest Lower Bound (GLB) granularities to resolve time mismatches by "Up" and "Down" conversions respectively. This work is a good example of handling granularity mismatch. However, the time-related mismatch is one-dimensional, whereas the spatial or spatiotemporal mismatch we address is multi-dimensional.

Dyreson et al. [DSF95, DELS00] have used multiple time hierarchies for temporal conversions in a spatiotemporal database. The scale operator converts an instant from one time granularity to another. The cast operator, on the other hand, returns a determinate finer time instant from an indeterminate instant, which might be an interval; i.e., it expresses any time granularity in terms of the other. Cost functions are also presented, and single-source and all-source shortest path algorithms are used to evaluate the minimum cost. If a direct function for the conversion exists, it is used. For indirect, or incompatible, conversions, a conversion of the form "first Down (coarser to finer granularity), then Up (finer to coarser granularity)", called a down-up conversion, produces correct results [DELS00]. As the authors indicate, however, it is possible that paths other than down-up paths are correct. Likewise, with location hierarchies, the most cost effective path type need not be a down-up path. However, a down-up path will always provide the best (smallest) results.

Jensen et al. proposed a relational algebraic framework for multidimensional data modeling for LBS [JKPT02, JKT04]. Multidimensional context mismatches found in LBS were investigated. A key contribution of this work was to define the transitivity of the containment relationship in LBS. It concluded that partial containment in multidimensional data can result in imprecision in data transformation. This fact is indeed true in any conciliation solution. A pessimistic solution was proposed based on the degree of containment in order to promote precision. However, our approach considers both precision and recall at the same time, because precision is not the only requirement for query results.

4 Location Leveling Data Model

In this section we propose our location leveling software approach. As was defined in [JKT04] our model is composed of concepts, values, and relationships.

4.1 Location Granularity Concepts

We assume that a geographical space is divided into several areas, either contiguous or noncontiguous, and names or labels are assigned depending on the context. For example, a country can be divided into distinct regions, such as states, and for each of these regions there is a corresponding attribute value for its name (i.e., State). The location concepts and the relations between them imply a Location Hierarchy. An example location hierarchy is represented by a graph, as shown in Figure 4, where the nodes are location concepts and the edges are the relationships between them. The location hierarchy has an implied orientation with finer concepts at the bottom of the graph and coarser ones at the top. The finer/coarser concepts are defined by a domain expert.

[Figure 4: An example location hierarchy]

Definition 1. A Location Hierarchy, H, is a set of Location Granularity Concepts or Types, L, and a set of defined mappings or relationships, M, between the location concepts, i.e., H = {L, M}.

Definition 2. A Location Granularity Concept represents a type of partitioning of a geographical area and may be viewed as an attribute name in the database schema. The set of all Location Concepts defined in the hierarchy is denoted by L = {L0, L1, ..., LN}. The concept L0 = ALL represents all defined geography, where for all Li ∈ L, Li ⊆ ALL.

The set L = {ALL, UNION, COUNTRY, STATE, REGION, COUNTY, CITY, ZIPCODE, CELLID, LAT/LONG} is defined for the hierarchy in Figure 4. The concept ALL at the top of the DAG is coarser than all other concepts, while the concept LAT/LONG is the finest among all. The domain expert may indicate that a concept is finer than another concept even when the subset relationship does not always hold. For example, LCELLID < LCITY based on the location hierarchy graph, but it could be possible that a specific CELLID would actually span several cities. This finer/coarser relationship is defined by the expert based on his general perception of the normal orderings among the concepts. It is reflected by the edges in the hierarchy DAG as seen in the Figure. If Li is neither finer nor coarser than Lj, then Li and Lj are said to be incomparable.

Based on the topological relationships between two spatial objects, the order can be seen as total or partial containment. Following the lead in [JKT04] we use ⊑T to represent the total containment relationship and ⊑P the partial containment. With Total Containment the subset relationship always holds among the specific values for the corresponding concepts. States are totally contained in Countries without any overlap. However, in a cellular coverage network geography, a CellID may extend into more than one City, so that a CellID can spatially overlap with two cities. In Partial Containment, not every existing location granularity value lvir ∈ LVi is in a containment relationship with an element lvjs ∈ LVj.

Definition 3. A relation from a Location Granularity Concept Li to a coarser Concept Lj is called Total Containment if for all lvir ∈ LVi there exists an lvjs ∈ LVj such that lvir ⊆ lvjs. The Total Containment relationship is denoted as Li ⊑T Lj and is read as "Li is totally contained in Lj".

Definition 4. A relation from a Location Granularity Concept Li to a coarser Concept Lj is called Partial Containment if for some lvir ∈ LVi there exists an lvjs ∈ LVj such that lvir ∩ lvjs is not empty. Partial Containment is denoted as Li ⊑P Lj.
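
As an illustration of Definitions 3 and 4, the following Python sketch classifies the relation between two concepts, assuming each granularity value is encoded as the set of primitive grid cells it covers. This encoding and the sample values are illustrative assumptions, not part of the model.

def containment_type(finer_values, coarser_values):
    """Classify the relation from a finer concept (name -> set of cells)
    to a coarser concept as 'total', 'partial', or 'none'."""
    total = True
    overlaps = False
    for cells_i in finer_values.values():
        if any(cells_i <= cells_j for cells_j in coarser_values.values()):
            overlaps = True                 # lvir is contained in some lvjs
        else:
            total = False
            if any(cells_i & cells_j for cells_j in coarser_values.values()):
                overlaps = True             # only a partial overlap exists
    if total:
        return "total"                      # Li ⊑T Lj
    return "partial" if overlaps else "none"

# States are totally contained in a country; a CellID may only overlap cities.
states = {"S1": {1, 2}, "S2": {3, 4}}
country = {"C": {1, 2, 3, 4}}
cells = {"Cell 8": {2, 3}}
cities = {"City 1": {1, 2}, "City 2": {3, 4}}
print(containment_type(states, country))    # -> 'total'
print(containment_type(cells, cities))      # -> 'partial'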

The finest Location Concept is a Point represented by a pair of coordinate values defined as Latitude/Longitude (Lat/Long). However, it is not possible to represent all Lat/Long values. Instead, it is better to view the geography under consideration as composed of grids that can be defined as primitive concepts. For a given Lat/Long value, it is possible to find its corresponding Grid. Hence, we assume that another Location Granularity Concept Grid is added to the Location Hierarchy to provide a way to represent the location granularities similar to raster data representation in spatial data management. A Grid can be of any size depending on the intended detail. The primitive concepts, i.e. grids, can further be subdivided into sub-grids forming a hierarchy. With a grid-like structure, all granularities can be represented with a rectangular area.

As shown in [JKT04], ⊑T is a partial order. In addition, it is shown that ⊑P is antireflexive, antisymmetric, and transitive. Assuming only one dimension type (location), based on Jensen's work we can then define the Location Dimension or Location Hierarchy as shown in Definition 1 as: H = (L, ⊑T, ⊑P, ALL, LAT/LONG). Here we assume that ∀C ∈ L, LAT/LONG ⊑T C ⊑T ALL. We use ⊑ when the exact type of relationship is not known (total or partial) but it is known that some relationship exists. Table 2 shows the transitivity of Location Granularity Concepts.

[Table 2: Transitivity of containment relationships between Location Granularity Concepts]

Location Hierarchies can be represented by directed graphs. In a Location Graph, nodes are location granularities and edges are the relationships from a set of finer granularities to coarse level granularities. A Location DAG - LDAG is a rooted directed acyclic graph, where LDAG is a triple (V , E, ALL) in which V is the vertex (or node) set of LDAG, E is the edge set of LDAG and ALL is the root vertex (node) of LDAG. The elements of V are either Location Concepts or Values and the elements of E are the Relationships (Mappings) between the vertices (nodes). When the conceptual location names, Location Granularity Concepts, are the vertices of an LDAG, it is a Concept LDAG (CLDAG).

Definition 5. A Concept LDAG (CLDAG) is a Location DAG, where the vertices are Location Granularity Concepts, V = {L1,…, Ln}, and the edges define the general relationships between them. Each edge is assigned a relationship type R, where R = {T, P}, Total Containment and Partial Containment respectively.

Figure 5 shows the Concept LDAG for a location hierarchy. Any edge in a CLDAG is a relationship from one vertex to another and is defined as a tuple (u, v, t), where u and v are vertices, (u, v) is an edge, and t is either T or P, for Total or Partial Containment respectively. Notice that this figure does not show all relationships; additional ones are implied by the transitivity relationships shown in Table 2. The CLDAG need only show the minimum number of relationships. However, in practice, when LL is performed we assume that each relationship (arc) shown in a CLDAG indicates that a corresponding translation algorithm exists to convert between the two connected values.
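
A CLDAG can be stored directly as a list of such (u, v, t) edge tuples, as in the Python sketch below. The concepts and edge types shown loosely follow Figures 4 and 5 and are illustrative assumptions only; the actual hierarchy and relationship types are defined by the domain expert.

CLDAG_EDGES = [
    ("GRID",    "CELLID",  "P"),
    ("GRID",    "ZIPCODE", "P"),
    ("CELLID",  "CITY",    "P"),
    ("CELLID",  "COUNTY",  "P"),
    ("ZIPCODE", "COUNTY",  "P"),
    ("CITY",    "COUNTY",  "T"),
    ("COUNTY",  "STATE",   "T"),
    ("STATE",   "COUNTRY", "T"),
    ("COUNTRY", "ALL",     "T"),
]

def parents(concept, edges=CLDAG_EDGES):
    # Immediate coarser concepts; edges point from finer to coarser.
    return [(v, t) for u, v, t in edges if u == concept]

def children(concept, edges=CLDAG_EDGES):
    # Immediate finer concepts.
    return [(u, t) for u, v, t in edges if v == concept]

print(parents("CELLID"))    # -> [('CITY', 'P'), ('COUNTY', 'P')]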

Note that if there are multiple location hierarchies defined by different parties, they can be combined under the same root. A relation may be defined between the finest granularity of one hierarchy and the default finest granularity, Grid. The smaller grid of the two hierarchies can be chosen. It is then possible to define multiple location hierarchies by mapping to the finest and coarsest level granularities of the main hierarchy, which are Grid and ALL respectively. The use of multiple location hierarchies can be considered the same as the multicalendar usage discussed in [DELS00].

[Figure 5: Concept LDAG for the location hierarchy]

4.2 Location Granularity Values

Each location concept has associated location values or instances. The domain of the attribute name will be the set of location values. For example, for LState, “Texas” is a value from the State domain. We consider only set-valued domains of attributes for location concepts. For example, Location Granularity Values for the concept “City” are from the set CityNames = {Arlington, Irving, Dallas, Plano, Allen, ...}.

Definition 6. The set of location identifiers corresponding to each distinct region or a point for the particular Location Granularity Concept Li are called Location Granularity Values of Li, and denoted by LVi. LVi is a set of values from the related location domain, LVi = {lvi1, lvi2, ..., lvik}, where k is the number of values in LVi.

A Workspace, W, is a two-dimensional, rectangular area represented by upper right corner and lower left corner coordinate point values in a Cartesian coordinate system. When we restrict the location values to a particular workspace W, it will be represented as LViW . Example 2 shows the usage. We drop W from the notation where the workspace is understood.

[Figure 6: Map of Texas regions and counties]

Example 2. Suppose W is the rectangular area that geographically covers State = "TX" (the state of Texas) in Country = "United States", i.e., W = "TX". For LVCountyW, the location values for the County concept are distinct, since the values are unique within the workspace TX. Also, there is no overlap in the geographical-spatial view. A sample map of Texas is shown in Figure 6, which consists of 20 regions and 255 counties as defined by the Texas Department of Transportation [DoT01]. The Location Granularity Values are defined as LVCountyW = {lvCounty1, lvCounty2, ..., lvCounty255}, with real values {"Anderson", "Andrews", ..., "Dallas", "Denton", ..., "Rockwall", ..., "Zavala"}.

Containment relationships between concepts induce similar relationships between their instances (values). In the Total Containment relationship between two concepts, Li ⊑T Lj, location values that belong to the finer concept are 100% contained in a unique coarser location value. However, in the Partial Containment relationship, the containment percentage may be less than 100%. Following the lead in [JKPT02, JKT04], we use the d-measure to indicate the degree of containment. Thus when we write lvir ⊑1 lvjs we mean that lvir is 100% contained in lvjs. The d-measure is an estimation of the containment percentage for a pair of location values and may be different for each pair. In this paper we use ⊑T and ⊑P to identify the relationships at the concept level. We use ⊑d to indicate the relationship at the value level. When an estimate of the actual degree of containment is known, d is replaced with the percentage estimate.

Definition 7. In Total or Partial Containment, where Li ⊑T Lj or Li ⊑P Lj respectively, individual value relationships are defined by Containment Percentages, or d-measures. In Total Containment, the d-measure is one (d = 1.00), i.e., ∀lvir ∈ LVi ∃lvjs ∈ LVj such that lvir ⊆ lvjs, written lvir ⊑1.0 lvjs. In Partial Containment, lvir ⊑d lvjs defines the fraction d of lvir that is contained in lvjs.

Suppose we have two spatial partitions for the same workspace shown in Figure 7. The following examples further illustrate the use of the d-measure.

[Figure 7: Two spatial partitions of the same workspace]

Example 3. Figure 7 shows examples of concept values for the CLDAG in Figure 4. Figure 5 has Lat/Long ⊑T Grid and Grid ⊑P Zipcode. In Figure 7 (a), we can see (Lat/Long = (x, y)) ⊑1.00 (Grid = 4). However, Grid 4 is partially contained in both Zipcode 1 and Zipcode 4, where the relations are denoted as (Grid = 4) ⊑0.45 (Zipcode = 1) and (Grid = 4) ⊑0.55 (Zipcode = 4). Since (x, y) can be in either Zipcode 1 or Zipcode 4, we say that (x, y) is in Zipcode 1 with a 45% probability or in Zipcode 4 with a 55% probability.

Even though the ⊑P relationship is transitive at the concept level, we cannot directly apply transitivity to the ⊑d relationship at the value level. We can certainly say that (ei ⊑di ej) ∧ (ej ⊑dj ek) implies (ei ⊑d ek) for some d, but what value should be used for d? We view the d-measure as only an estimate of the possible containment percentage among the values; it is neither a minimum nor a maximum. This is a major difference between our approach and that in previous work. In [JKT04] the authors assume that the "d" value is a minimum. They indicate that "ei ⊑d ej" means that "ej is larger than or equal to d times the size of ei" [JKT04, p. 7]. They thus assume the following:

(ei ⊑di ej) ∧ (ej ⊑dj ek) ⟹ ei ⊑0 ek (no minimum containment of ei in ek can be guaranteed)

However, we assume an estimate only. Thus in our work we take the following approach:

(ei ⊑di ej) ∧ (ej ⊑dj ek) ⟹ ei ⊑(di·dj) ek

Jensen's approach is conservative and pessimistic when compared to ours. Their view of the d-measure is that it is a minimum containment. However, in reality we would not usually know the minimum amount of a CellID that is contained in a city. Notice that, at best, we can only hope for an estimate in the real world. Thus we take a more realistic approach and assume that the d-measure is only an estimate, not the minimum that could be contained. In the LDSM middleware software architecture, these estimates are used to estimate space requirements, so an expected value seems to be the best choice. A more theoretical paper could examine the differences between the pessimistic (Jensen's approach), optimistic, and expected (our approach) values for containment in transitivity.

Example 4. From the location hierarchy in Figure 5, we have Grid ⊑P CellID and CellID ⊑P County. Transitivity of subset relations implies Grid ⊑P County. In Figure 7 (b), only 80% of Cell 8 and 70% of Cell 7 are shown in the workspace. 5% of Grid 4 is contained in Cell 8, i.e., (Grid = 4) ⊑0.05 (CellID = 8), and 80% of Cell 8 is contained in County 1, i.e., (CellID = 8) ⊑0.8 (County = 1). As a result, we can only say that some part of Grid 4 may be contained in County 1. With an estimated probability of 80% of 5%, i.e., 0.05 * 0.8 = 0.04, 4% of Grid 4 is estimated to be contained in County 1, but we do not know exactly which data are in it. That relation may be denoted as (Grid = 4) ⊑0.04 (County = 1).

We further examine the difference in d-measure between our approach and that proposed by Jensen. Looking at Example 4, with our approach (Grid = 4) ⊑0.04 (County = 1). With Jensen's approach this would be (Grid = 4) ⊑0 (County = 1), since it is not really known how much of Grid 4 is contained in County 1, and zero is the minimum value that can be guaranteed to be in that county. Notice that it appears about half of Grid 4 is actually in County 1. We thus see that neither approach is precise; in fact it is impossible to get a precise value. We therefore think an estimate of the containment, neither a minimum nor a maximum, is the most logical. In terms of the location leveling algorithm itself, the d-measure is not even used. In practice, either approach or a hybrid combination of the two could be used. A Location DAG is also used to represent location values.

Definition 8. The Location Granularity Values for the concepts in CLDAG are represented in a Value LDAG (VLDAG). Elements of the set LVi valid for the corresponding Location Granularity Concept Li are the vertices (nodes) of the graph for that level. The edges of the graph define the individual relationships and d-measures are specified in between the values.

[Figure 8: A Value LDAG with d-measures]

The edges of a VLDAG are defined as (Valuei, Valuej, Weightij, Weightji). Here we have an estimate of the containment in both directions: Valuei ⊑Weightij Valuej and Valuej ⊑Weightji Valuei. Continuing Example 4, the tuple (CellID, County, P) is an edge of the CLDAG, whereas the tuples (Cell 8, County 1, 0.8, 0.32) and (Cell 7, County 2, 0.70, 0.30) are edges of the VLDAG. Using transitivity of the relations, if there exist (u, v, tuv, tvu) and (v, w, tvw, twv), then we might have (u, w, tuw, twu). Here, tuw is estimated to be tuv * tvw for the "contained in" relationship in the VLDAG, and twu is estimated to be twv * tvu. The dotted line in Figure 8 shows the transitive d-measures.
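
The following Python sketch shows VLDAG edges stored as such (Valuei, Valuej, Weightij, Weightji) tuples, together with the transitive estimate just described (tuw ≈ tuv * tvw). The (Cell 8, County 1) and (Cell 7, County 2) weights come from the running example; the reverse weight on the Grid edge is a made-up placeholder.

edges = {
    ("Grid 4", "Cell 8"):   (0.05, 0.30),    # 0.30 is an assumed reverse weight
    ("Cell 8", "County 1"): (0.80, 0.32),
    ("Cell 7", "County 2"): (0.70, 0.30),
}

def transitive_edge(u, v, w, table=edges):
    """Estimate the (u, w) edge from (u, v) and (v, w), in both directions."""
    t_uv, t_vu = table[(u, v)]
    t_vw, t_wv = table[(v, w)]
    return (t_uv * t_vw, t_wv * t_vu)

print(transitive_edge("Grid 4", "Cell 8", "County 1"))    # ≈ (0.04, 0.096)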

4.3 Location Granularity System

A given CLDAG can be viewed differently by each content provider. The per-content-provider CLDAGs can be viewed as logical views of the default CLDAG. Value LDAGs for these will also be defined as logical views. Content-provider-specific views correspond to external schemas. An example of a CLDAG with two content providers is given in Figure 9. The view for Content Provider 1 is defined as:

(ALL → Country → State → County → CellID → Grid)

and the view for Content Provider 2 is defined as:

(ALL → Country → State → County → Zipcode → Grid).

A default grid structure and the corresponding Concept LDAGs and Value LDAGs should be defined in the middleware, i.e., the LDSM. Content providers then define their Location Related (LR) attributes and the location hierarchy between them. In fact, the relationship tuples must be defined in this manner. LR-attribute names and the set of valid values for the related context domain should also be defined if they differ from the default values. In addition to the CLDAG and VLDAG structures, metadata holds the Data Location pointers per content provider, the application type, and the routing information. Content providers may also give the grid ranges for their lowest level of granularity. If a content provider needs a finer level grid structure, it can subdivide the grids using the provided tools and define the related dependencies.
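
A per-provider view can be kept simply as the chain of concepts that provider uses; the Python sketch below turns the two view chains listed above into finer-to-coarser edges. The helper is illustrative, not an interface defined in this paper.

PROVIDER_VIEWS = {
    "provider_1": ["ALL", "COUNTRY", "STATE", "COUNTY", "CELLID", "GRID"],
    "provider_2": ["ALL", "COUNTRY", "STATE", "COUNTY", "ZIPCODE", "GRID"],
}

def view_edges(provider):
    # Turn a coarse-to-fine chain into finer-to-coarser ("contained in") edges.
    chain = PROVIDER_VIEWS[provider]
    return [(chain[i + 1], chain[i]) for i in range(len(chain) - 1)]

print(view_edges("provider_1"))
# [('COUNTRY', 'ALL'), ('STATE', 'COUNTRY'), ('COUNTY', 'STATE'),
#  ('CELLID', 'COUNTY'), ('GRID', 'CELLID')]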

[Figure 9: An example CLDAG with two content provider views]

Definition 9. A Location Granularity System is defined as a set LGS = {L, LV, ⊑, ⊑d}: the Location Granularity Concepts L, their corresponding Location Granularity Values LV, the relationships between concepts ⊑, and the individual granularity value relationships ⊑d.

5 Location Leveling

Both the CLDAG and VLDAG graphs should exist for Location Leveling. The Location Granularity Concepts of the BL and DL can be found in the CLDAG structure, whereas the BL and DL values can be found in the corresponding VLDAG structure. In fact, the Concept LDAG corresponds to the term intension, i.e., the database schema, and the Value LDAG is an extension of the schema where the values show the database state [EN00]. We can find the traversal path from the CLDAG and then substitute the values from the corresponding VLDAG subgraph.

During conversion, the CLDAG is first traversed to find the relationship between the BL and DL. Then, the corresponding values from the VLDAG are assigned to the BL and DL values. A subgraph of the Value LDAG is shown in Figure 10. If there is an individual value relationship, ⊑d, then there is an edge in the VLDAG. Example 5 shows the corresponding translation steps using the CLDAG and VLDAG. The VLDAG may not actually be materialized; it could be represented by translation tables or algorithms for each arc in the corresponding CLDAG.

Example 5. Suppose we have the LDQ as: “Find the closest theaters”. This query is formulated using conventional relational algebra [EN00]:

πTheaterName,Address (σclosest(Address, Here) (Theaters))

and the user's current location is given by the location service vendor as:

BL = {CellID, 21}

then we will have the query in the form:

πTheaterName,Address (σclosest(Address, "CellID=21") (Theaters)).

First, the concept "CellID" is accessed in the CLDAG for the BL. For the data access, the location granularity is searched in a metadata database; the data location concept is then found to be "City" for the DL. Since the two concepts are different, a reachability path must be found from "CellID" to "City" in the CLDAG. From Figure 5 we see that an "Upward" conversion is needed. Therefore, the VLDAG in Figure 10 is accessed to find the values of "City" that the BL value is "contained in". Let us assume CellID 21 is contained only in the city of Dallas. The corresponding attribute data value is put in the predicate and the data location is accessed with

DL = {City, “Dallas”},

then the query becomes:

πTheaterName,Address (σclosest(Address, "City=Dallas") (Theaters))

Here, the content provider does not have to pre-process or change the query; it can be processed directly in their database.
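
The rewrite in Example 5 amounts to substituting the leveled DL value for the bound BL value in the query predicate. The Python sketch below renders the query as an SQL-like string for readability; the template and the CellID-to-City lookup are illustrative placeholders rather than the paper's relational algebra or an actual VLDAG.

QUERY_TEMPLATE = 'SELECT TheaterName, Address FROM Theaters WHERE closest(Address, "{loc}")'

CELL_TO_CITY = {21: ["Dallas"]}     # pretend VLDAG lookup: CellID -> containing cities

def rewrite(cell_id):
    cities = CELL_TO_CITY.get(cell_id, [])
    return [QUERY_TEMPLATE.format(loc="City=" + city) for city in cities]

print(rewrite(21))
# ['SELECT TheaterName, Address FROM Theaters WHERE closest(Address, "City=Dallas")']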

[Figure 10: A subgraph of the Value LDAG]

5.1 Conversion of Location Granularities

In a conversion procedure, the aim is to express the source location granularity in terms of the target location granularity. A conversion from a fine granularity to a coarser granularity is called an Upward Conversion (MoveUp), while a conversion from a coarse granularity to a finer granularity is called a Downward Conversion (MoveDown). In Upward conversion, the individual "contained in" relationships (Li ⊑ Lj) are used, whereas in Downward conversion, the reverse "contains" relationships (from Lj to Li) are used. Below, we describe the Upward and Downward conversion strategies with examples.

In a conversion, let F be the finer location granularity and G be the coarser location granularity, with F ⊑P G. Suppose the value containment relations ⊑d are defined only for the following pairs: <Fi, Gi>, <Fi, Gk>, and <Fk, Gk>. In an Upward Conversion from F to G, all location values of G that have a relationship with the individual Fi are included in the result. Figure 11 shows the granularity F converted to the granularity G in set representation. Since Fi is contained in both Gi and Gk, both Gi and Gk will be in the result set. In a Downward Conversion from G to F, all F values that are contained (completely or partially) in the G location values are returned. In Figure 12, for the Gk to F conversion, both Fi and Fk are included in the result set. The result will include some data points that do not belong to Gk, since Fi is not a subset of Gk. In addition, from the figure it seems there may be many points in Gk that are not returned; note, however, that from the individual containment relations we have only <Fi, Gk> and <Fk, Gk>. The intersection of the granules can exclude data in Gk. Thus, partial containment introduces this imprecision.
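
A minimal Python sketch of MoveUp and MoveDown over the F/G example above, assuming the value-level relations are stored as a "finer value -> set of related coarser values" map (a simplification of the VLDAG for illustration):

CONTAINED_IN = {             # value-level relations for F ⊑P G
    "Fi": {"Gi", "Gk"},      # Fi overlaps both Gi and Gk
    "Fk": {"Gk"},
}

def move_up(f_value):
    # Upward conversion: every G value related to the given F value.
    return CONTAINED_IN.get(f_value, set())

def move_down(g_value):
    # Downward conversion: every F value completely or partially in G.
    return {f for f, gs in CONTAINED_IN.items() if g_value in gs}

print(move_up("Fi"))      # -> {'Gi', 'Gk'}
print(move_down("Gk"))    # -> {'Fi', 'Fk'}  (Fi introduces the imprecision)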

The above examples illustrate conversion between concepts that are immediately related to each other. However, the conversion process can occur across many levels in the concept hierarchy. Two location concepts are not convertible if there is no directed path between them in the CLDAG. It may then be possible to construct a path either by going first Upward then Downward (an Upward-Downward conversion) or first Downward then Upward (a Downward-Upward conversion). Since ALL is the upper bound and Grid is the lower bound for all location granularities, it is always possible to find a conversion between the granularities. Although a LAT/LONG value is finer than a GRID, to provide a finite mapping, Grid is used as the lower bound for all other granularities. For the conversion, however, there is no need to go all the way up to the coarsest granularity (i.e., ALL) or down to the finest granularity (i.e., Grid). Therefore, we should be able to find a Least Upper Bound (LUB) (other than ALL) or a Greatest Lower Bound (GLB) (other than GRID) for the two location granularities to convert or compare.

[Figure 11: Upward conversion from F to G]

[Figure 12: Downward conversion from Gk to F]

An LUB of two location granularities is the finest location granularity that is coarser than both. If there is an LUB for the two location concepts, then the conversion path is an Upward-Downward (up-down) conversion [DELS00]. CellID → County ← Zipcode is the up-down path for the CellID-Zipcode conversion, and County is the LUB of the two concepts. On the other hand, a GLB of two location granularities is the coarsest location granularity that is finer than both. When the GLB is used for the conversion, the path will be a Downward-Upward (down-up) conversion. Therefore, CellID ← Grid → Zipcode is a down-up path for the CellID-Zipcode conversion, where Grid is the GLB of the two concepts. Both up-down and down-up paths are indirect paths.
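
Candidate LUBs for an up-down path can be found as the common coarser concepts of the two granularities, as in the following Python sketch. The parent map is an illustrative assumption loosely following Figure 5, not the paper's exact procedure.

PARENTS = {
    "GRID": {"CELLID", "ZIPCODE"},
    "CELLID": {"CITY", "COUNTY"},
    "ZIPCODE": {"COUNTY"},
    "CITY": {"COUNTY"},
    "COUNTY": {"STATE"},
    "STATE": {"COUNTRY"},
    "COUNTRY": {"ALL"},
}

def ancestors(concept):
    seen, stack = set(), [concept]
    while stack:
        for parent in PARENTS.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def lub_candidates(a, b):
    # Common coarser concepts of a and b; the LUB is the finest among them.
    return ancestors(a) & ancestors(b)

print(lub_candidates("CELLID", "ZIPCODE"))
# -> {'COUNTY', 'STATE', 'COUNTRY', 'ALL'}; COUNTY is the LUB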

There may be more than one path between two location granularities. If there is a way to compare any two paths, the path with the more precise result should be chosen. Thus, if unnecessary operations in the conversion procedure can be avoided, a more precise result may be produced. In this case, there would be no need to find an LUB or a GLB, since direct relationships are used. All individual relationships should be included among the relationships in the Location Granularity System definition.

5.2 Location Leveling Algorithm

Location Leveling uses the CLDAG and VLDAG definitions which are assumed to exist in the metadata. This section provides an overview of a basic location leveling algorithm to enable the location granularity conversions using these metadata. Detailed pseudo-code for the entire algorithm can be found at the authors’ Web site (). A flowchart of the Location Leveling Algorithm is given in Figure 13. Keep in mind that in this algorithm, when we look for an LUB, we are looking for a concept finer than ALL and when we look for a GLB, we are looking for a concept coarser than GRID.

Although previous conversion algorithms have been proposed, ours uses a cost to determine the best conversion path when several are found. A CLDAG is created to indicate the existence of a conversion technique between the granularities, whereas the VLDAG contains the actual value-to-value relationships (in the form of tables or algorithms). If more than one indirect path exists between incomparable granularities, the candidate paths are evaluated and the minimum cost path is chosen. Here we calculate cost based on the expected size (number of values) of the final result set. The number of location values substituted depends on the branching factor of the location concept.

The branching factor, bfi, can be defined as the number of immediate children of a node. The average number of children of a Location Granularity Concept Li can be used to define an approximate bfi. Conversely, the reverse branching factor, rbfi, can be defined as the number of immediate parents of a node. The branching factor of a location granularity clearly affects the number of queries that can be created for that level.

The Location Leveling algorithm checks whether the BL (SourceL) and DL (TargetL) are comparable by searching the ancestors or descendants of the SourceL. If the two location concepts are comparable, the direction of the translation is also known. In an upward conversion, the set of data locations is prepared by tracing all parent edges (i.e., "contained in" relations) of the SourceL.Val until the TargetL is reached. In the case of a Total Containment relationship, there is only one parent value for the fine value. However, in Partial Containment, the number of location values that the SourceL.Val is contained in depends on the reverse branching factor, rbfSourceL, of the SourceL granularity. In a downward conversion, location granularity values are found by following the children edges (i.e., the "contains" relationship) of the SourceL.Val.

Two example traces of the algorithm, using Figures 5 and 7, are given below. Let the time to access an individual value from the VLDAG be tAC and the query creation time be tQC. We show the important steps, but do not show all steps in detail. The algorithm first determines the source and destination granularities and the starting source value. It then determines whether these granularity concepts are comparable by checking whether there is a path between them in the CLDAG. The actual conversion of the source value to the destination values is then performed by traversing the VLDAG. This finally gives the values to be used in the query.
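
The following Python sketch condenses the flow that the two traces below walk through: check comparability, then MoveUp or MoveDown over the VLDAG, and report a cost of the form tAC + tAC*n + tQC*n. The helpers (ancestors, descendants, vldag_up, vldag_down) are hypothetical callables standing in for the metadata structures.

def leveling(source_concept, source_value, target_concept,
             ancestors, descendants, vldag_up, vldag_down,
             t_ac=1.0, t_qc=1.0):
    if source_concept == target_concept:
        return [source_value], t_qc
    if target_concept in ancestors(source_concept):        # MoveUp
        values = vldag_up(source_value, target_concept)
    elif target_concept in descendants(source_concept):    # MoveDown
        values = vldag_down(source_value, target_concept)
    else:
        raise ValueError("not directly comparable; an indirect path is needed")
    cost = t_ac + t_ac * len(values) + t_qc * len(values)
    return list(values), cost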

Example 6. Upward Conversion

Leveling: // Convert from CellID to County

SourceL = CellID, SourceL.Val = Cell 8, TargetL = County.

SourceL ≠ TargetL → IsComparable(CellID, County)

IsComparable: // Using CLDAG determine if CellID and County are comparable

Ancestors(SourceL, CLDAG) = {Country, County, City}

PathUp = TRUE, YesComparable = TRUE, Func = “MoveUp”

ConvertQuery: // Using VLDAG determine the precise DL values to use. Also determine cost of query

Access VLDAG where SourceL.Val = Cell 8 (see Figure 7)

Put the only value “County 1” as TargetL.Val in QS

Cost = tAC for (Cell 8) + tAC * 1 + tQC * 1

Leveling:

Return data location values in QS

Example 7. Downward Conversion

Leveling: // Convert from ZipCode to Grid

SourceL = Zipcode, SourceL.Val = Zipcode 1, TargetL = Grid.

SourceL ≠ TargetL → IsComparable(Zipcode, Grid)

IsComparable: // Using CLDAG determine if ZipCode and Grid are comparable

Ancestors(SourceL, CLDAG) = {County},

Descendants(SourceL, CLDAG) = {Grid}

PathDown = TRUE, YesComparable = TRUE, Func = “MoveDown”

ConvertQuery: // Using VLDAG determine the precise DL values to use. Also determine cost of query

Access VLDAG where SourceL.Val = Zipcode 1 (see Figure 7)

Put each of Grid 1, Grid 2, Grid 4, Grid 5 as a TargetL.Val in QS.

Cost = tAC for (Zipcode 1) + tAC * 4 + tQC * 4

Leveling:

Return data location values in QS

[Figure 13: Flowchart of the Location Leveling algorithm]

5.3 Complexity of LL

Let Li and Lj be two location granularities defined in the CLDAG. The time complexity of the location leveling algorithm is twofold. First, a path should be found in the CLDAG from the source location granularity to target location granularity. A path from Li to Lj can be found using a traditional graph algorithm such as shortest path. The path search cost depends on the number of location concepts which exist in the path, N. Second, the VLDAG is accessed to extract the related data location values. In upward or downward conversions, the cost of processing can then be bounded by the branching factors, rbfi or bfi respectively, of Location Concept i. The branching factor of Li is determined by the number of children that individual LVis have. In an Upward conversion, from Li to Lj, rbfi is a factor to find the location values in the conversion path. For example, if every node lvik has an average of 3 “contained in” relations with the location values of Lj, then rbfi = 3. bfi is the branching factor showing the “contains” relations to the location granule that will be used in the Downward conversion. Path length, lengthst, will determine the number of accesses to the VLDAG. The expected number of queries to be created from s to t is thus:

E[Qst] = ∏ fi for i = 1, ..., lengthst, where fi = rbfi for an upward step and fi = bfi for a downward step on the path from s to t.   (1)

Note that in actuality the individual predicate values may be grouped together in only one query rather than multiple. This formula derives the expected number of predicate values to be created from the one source value input.
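
As a small numeric illustration of Equation 1, under assumed branching factors of 3 for an upward hop and 4 for a downward hop, one source value expands as follows:

path_factors = [3, 4]        # assumed rbf for the first hop, bf for the second

expected_values = 1
for factor in path_factors:
    expected_values *= factor

print(expected_values)       # -> 12 predicate values expected from one source value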

To determine the algorithm complexity, we assign the maximum value amongst the branching factors as BF = max(bfi, rbfi), and estimate the number of data location values for the cost. From Equation 1, the number of queries may then be considered of complexity O(BF^lengthst). The expected complexity of the algorithm, including both costs (path search and substitution of source values), is then O(N) + O(BF^lengthst).

In our approach, not every location concept pair relationship has to be defined. Thus, the space complexity is bounded by the number of relationships, which is O(|LV|). The worst case space for a CLDAG is O(N^2) and for a VLDAG is O(|LV|). However, the expected space for our algorithm is much less than in the naive scheme where every pair relationship is defined. In the best case, our approach uses only O(N) space for the CLDAG and O(|LV|) for the VLDAG.

In the Location Leveling approach, using the granularity graphs in terms of Location DAGs, the location domain is large but finite. If we had used coordinates (lat/long) as the lower bound for all granularities, the number of edges would have been infinite. The Grid concept as the lower bound, in fact, provides this finiteness to our approach.

5.4 Comparison to Other Approaches

We have defined a general and flexible location leveling algorithm that uses the inherent relationships between location granularities. The approach we have proposed does not require every location concept to have a relationship with every other location concept. However, our approach provides a translation capability from any location granularity to any other granularity.

The theoretical basis for our proposed software architecture is based on previously published research [JKPT02, JKT04, WJS93]. Our work builds on these earlier works by proposing a practical software architecture to utilize them. There are two major theoretical differences between our work and the earlier ones: 1. We take a more optimistic approach in calculating the percentage of containment for partial containment across multiple levels; we use an expected or average containment value rather than the minimum. 2. Our location leveling algorithm uses the branching factor estimates stored with the VLDAG to help estimate the best translation when multiple translations are possible.

In the existing location translation or geocoding conversion approaches, only certain conversion paths are available, and translation tables are provided only for these conversions. Both SignalSoft Corp's "" software [Sig00] and the geocoding used by MapInfo [Map00] and Mobilaris SE [Mob00] are examples of systems that use this basic approach. Figure 14 illustrates the location hierarchies, or CLDAGs, based on the translation relationships provided by these approaches.

[Figure 14: CLDAGs for the translations provided by SignalSoft and by MapInfo/Mobilaris]

The VLDAGs of these two CLDAGs are the translation tables provided by each solution provider. In SignalSoft Corp's translation scheme and in geocoding, only one-level conversions can be performed. Using these schemes' translation tables, our Location Leveling approach not only supports the same one-level conversions, but can also use the CLDAG to provide indirect conversions. An indirect conversion is any conversion that requires traversing more than one edge when a direct relationship is not defined. For instance, Location Leveling could provide a conversion between Zone and CellID in SignalSoft's hierarchy (a small sketch of such chaining follows the translation lists below). With their approach, however, this would not be possible unless an explicit translation table is defined. SignalSoft only provides the following translations (shown in Figure 14) [Sig00]:

Zone → URL ; Lat/Long → Zone; CellID → Lat/Long; Lat/Long → Address

Mobilaris [Mob00] and MapInfo [Map00] provide only a translation from:

Lat/Long → Address

Location Leveling would support all of the above conversions and the following translations:

Lat/Long → URL; Zone → CellID; URL → CellID; URL → Address

Zone → Address; CellID → Address; URL → Zone; Zone → Lat/Long

Lat/Long → CellID; Address → Lat/Long; URL → Lat/Long; CellID → Zone

CellID → URL; Address → URL; Address → Zone; Address → CellID
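
An indirect conversion such as CellID → Zone is obtained simply by chaining two of the direct translations above (CellID → Lat/Long, then Lat/Long → Zone), as in the following Python sketch. The lookup values and the zone rule are hypothetical stand-ins for the vendors' translation tables.

def cellid_to_latlong(cell_id):
    lookup = {21: (32.78, -96.80)}              # illustrative values only
    return lookup[cell_id]

def latlong_to_zone(latlong):
    lat, lon = latlong
    return "Zone-A" if lat > 32 else "Zone-B"   # illustrative rule only

def cellid_to_zone(cell_id):
    # Indirect conversion: traverse CellID -> Lat/Long -> Zone in the combined CLDAG.
    return latlong_to_zone(cellid_to_latlong(cell_id))

print(cellid_to_zone(21))    # -> 'Zone-A'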

The combined CLDAG shown in Figure 15 illustrates what would be used to support these translations. Notice that this is done with no additional data or translations beyond those provided by the individual companies. Therefore, Location Leveling not only provides the same functionality as the other systems, but also provides a powerful approach to accommodate separate location translation techniques and to enable any-granularity to any-granularity conversions.

[Figure 15: Combined CLDAG supporting all translations]

6 Summary

We have proposed a systematic methodology that allows the conversion between any two locations when needed to process a Location Dependent Query. Our work has built on earlier theoretical research which examined modeling of multidimensional data [JKT04]. Through the use of a middleware implementation, this LL technique facilitates the processing of a Location Dependent Query at any content provider regardless of the location to which the query is bound.

References

[AGS97] R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. In 13th International Conference on Data Engineering (ICDE ’97), pages 232–243, Birmingham, U.K., April 1997. IEEE Computer Society.

[ASS01] A. Abello, J. Samos, and F. Saltor. A framework for the classification and description of multidimensional data models. In 12th International Workshop on Database and Expert System Applications, DEXA 2001, pages 668–677, Munich, Germany, September 2001. IEEE Computer Society.

[DELS00] C. E. Dyreson, W. S. Evans, H. Lin, and R. T. Snodgrass. Efficiently supporting temporal granularities. IEEE Transactions on Knowledge and Data Engineering, 12(4):568–587, July/August 2000.

[DoT01] Texas Department of Transportation. Texas county highway maps- counties by region. web page, , 2001.

[DSF95] C. Dyreson, R. Snodgrass, and M. Freiman. Efficiently supporting temporal granularities in a dbms. Technical Report TR 95/07, James Cook University of Queensland, Townsville, Queensland, Australia, 1995.

[EG03] ESRI and . About GIS: How GIS Works, Using Geographic Data. Web page, , 2003.

[EN00] R. Elmasri and S. B. Navathe. Fundamentals of Database Systems. Addison Wesley, 3rd edition, 2000.

[HE02] K. Hornsby and M. J. Egenhofer. Modeling moving objects over multiple granularities. Annals of Mathematics and Artificial Intelligence, 36(1-2):177–194, September 2002.

[JKPT02] C.S. Jensen, A. Kligys, T.B. Pedersen, and I. Timko. Multidimensional data modeling for location-based services. In ACM-GIS 2002, 10th ACM International Symposium on Advances in Geographic Information Systems, pages 55–61, McLean, VA, USA, November 2002.

[JKT04] Christian S. Jensen, Augustas Kligys, Torben Bach Pedersen, and Igor Timko. Multidimensional data modeling for location-based services. The VLDB Journal, 13(1):1–21, January 2004.

[Jos01] Rui Jose. An Open Architecture for Location-Based Services in Heterogeneous Mobile Environments. PhD thesis, Lancaster University, Computing Department, Lancaster, England, April 2001.

[K06] V. Kumar. Mobile Database Systems. Wiley, 2006.

[KRSO01] V. Khatri, S. Ram, R. T. Snodgrass, and G. O’Brien. Supporting user-defined granularities and indeterminacy in a spatiotemporal conceptual model. Technical Report TR-55, A TIMECENTER Technical Report, , 2001.

[Leo98] Ulf Leonhardt. Supporting Location-Awareness in Open Distributed Systems. PhD thesis, University of London, Imperial College of Science, Technology and Medicine, Department of Computing, London, UK, May 1998.

[Lu97] Yijun Lu. Concept hierarchy in data mining: Specification, generation and implementation. Master’s thesis, Simon Fraser University, School of Computing Science, Canada, December 1997.

[Map00] MapInfo. Geocoding with MapInfo Products, MapXtreme Java edition developer’s guide. web page, , 2000.

[MMR98] S.K. Madria, M.K. Mohania, and J.F. Roddick. A query processing model for mobile computing using concept hierarchies and summary databases. In The 5th International Conference of Foundations of Data Organization (FODO’98), pages 147–157, Kobe, Japan, November 1998.

[Mob00] Mobilaris. MobilarisAB-location services. web page, , 2000.

[Ped00] Torben Bach Pedersen. Aspects of Data Modeling and Query Processing for Complex Multidimensional Data. PhD thesis, Aalborg University, Department of Computer Science, Aalborg, Denmark, May 2000.

[PJD99] T. B. Pedersen, C. S. Jensen, and C. E. Dyreson. Supporting imprecision in multidimensional databases using granularities. In International Conference on Scientific and Statistical Database Management, (SSDBM’99), pages 90–101, Cleveland, OH, USA, July 1999.

[PJD01] T. B. Pedersen, C. S. Jensen, and C. E. Dyreson. A foundation for capturing and querying complex multidimensional data. Information Systems, 26(5):383–423, 2001.

[SD02] A. Y. Seydim and M. H. Dunham. A Location Dependent Benchmark with Mobility Behavior. In International Database Engineering and Applications Symposium, IDEAS 2002, Edmonton, CANADA, July 2002.

[SDK01a] A. Y. Seydim, M. H. Dunham, and V. Kumar. An architecture for location dependent query processing. In 4th International Workshop on Mobility in Databases and Distributed Systems (MDDS’01) in 12th International Conference on Database and Expert System Applications (DEXA 2001), Munich, Germany, June 2001.

[SDK01b] A. Y. Seydim, M. H. Dunham, and V. Kumar. Location dependent query processing. In S. Banerjee, P.K. Chrysanthis, and E. Pitoura, editors, Second ACM International Workshop on Data Engineering for Mobile and Wireless Access, MobiDE’01, pages 47–53, Santa Barbara, California, USA, May 2001.

[Sig00] SignalSoft Corp. SignalSoft Corporation - Wireless Location Services. web page, , 2000.

[SWCD97] A. P. Sistla, O. Wolfson, S. Chamberlain, and S. Dao. Modeling and querying moving objects. In 13th International Conference on Data Engineering (ICDE’97), pages 422–432, Birmingham, U.K., April 1997. IEEE Computer Society.

[TWHC04] G. Trajcevski, O. Wolfson, K. Hinrichs, and S. Chamberlain. Managing uncertainty in moving objects databases. ACM Transactions on Database Systems, 29(3), September 2004.

[WJS93] X. S. Wang, S. Jajodia, and V.S. Subrahmanian. Temporal modules: An approach toward federated temporal databases. In ACM 1993 SIGMOD International Conference on Management of Data, pages 227–236, Washington, DC, USA, May 1993.

[Wol02] O. Wolfson. Moving objects information management: The database challenge (vision paper). In Next Generation Information Technologies and Systems, 5th International Workshop, NGITS 2002, Caesarea, Israel, June 24-25, 2002, volume 2382 of Lecture Notes in Computer Science, pages 75–89. Springer, 2002.

[WXCJ98] O. Wolfson, B. Xu, S. Chamberlain, and L. Jiang. Moving objects databases: Issues and solutions. In International Conference on Scientific and Statistical Database Management, (SSDBM’98), pages 111–122, Capri, Italy, July 1998.

[XW03] B. Xu and O. Wolfson. Time-series prediction with applications to traffic and moving objects databases. In Proceedings of the MobiDE Conference, 2003.

[XZL06] J. Xu, B. Zheng, and W.-C. Lee. Data access techniques for location-based services. In 7th International Conference on Mobile Data Management (MDM'06), 2006.

-----------------------

[1] This material is based upon work supported by the National Science Foundation under Grants No. IIS-9979458 and No. IIS-0208741.
