
Articles

2013

Towards Building Linguistic Ontology via Cross-Language Matching.

Mamoun Abu Helou, Matteo Palmonari, Mustafa Jarrar, Christiane Fellbaum.
Submitted to the 7th Global WordNet Conference (GWC 2014). Tartu, Estonia, January 25-29, 2014.

 

2012

Arabic Text Correction Using Dynamic Categorized Dictionaries: A Statistical Approach

Adnan Yahya, Ali Salhi
In proceedings of the 4th International Conference on Arabic Language Processing (CITALA 2012). Rabat, Morocco, 1-2 May 2012.

Abstract: This paper describes a technique for spell-checking and correcting Arabic text that provides different variables that can be controlled to give customized results based on the properties of the processed text. The proposed technique depends on dynamic dictionaries controlled and customized based on the input text categorization. In the research reported here we employ a statistical/corpus-based approach with data obtained from the Arabic Wikipedia and local Palestinian newspapers. Based on corpus statistics we constructed databases of words and their frequencies as single, double and triple expressions and used that as the infrastructure for our spelling and text correction technique. Our spelling technique builds on earlier work [7], but uses new spelling variables and dynamic dictionaries based on categorized texts. We briefly report on the results of preliminary testing and analysis. While the results reported here are promising, they must be viewed as work in progress, still in need of more testing, refining, integration and deployment in real-life settings.
Keywords: Natural Language Processing; Arabic Wikipedia; Arabic Text Correction; Categorized Corpus; Text Categorization
@inproceedings{CITALA12,
author = {Adnan Yahya and Ali Salhi},
title = {Arabic Text Correction Using Dynamic Categorized Dictionaries: A Statistical Approach},
booktitle={Proceedings of the 4th International Conference on Arabic Language Processing},
year = {2012},
publisher={CITALA},
address={xxx},
month={May},
url = {http://localhost/sina/publications/#CITALA12}
}
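
A minimal Python sketch (not the authors' implementation) of the frequency-based correction with category-specific dictionaries that the abstract describes. The mini-corpora, category names, and the edit-distance-1 candidate generation are illustrative assumptions:

from collections import Counter

def edits1(word, alphabet):
    """All strings at edit distance 1 from `word` (deletes, swaps, replaces, inserts)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + swaps + replaces + inserts)

class CategorizedCorrector:
    def __init__(self, corpora_by_category):
        # One frequency dictionary per text category (e.g. "tech", "food").
        self.dicts = {cat: Counter(text.split())
                      for cat, text in corpora_by_category.items()}
        self.alphabet = sorted({ch for text in corpora_by_category.values()
                                for ch in text if not ch.isspace()})

    def correct(self, word, category):
        """Return the most frequent in-category candidate within edit distance 1."""
        freq = self.dicts[category]
        if word in freq:                     # already a known in-category word
            return word
        candidates = edits1(word, self.alphabet) & freq.keys()
        return max(candidates, key=freq.get) if candidates else word

# Toy usage with made-up mini-corpora:
corrector = CategorizedCorrector({
    "tech": "the server restarted the server logs show errors",
    "food": "the sever stirred the soup",   # 'sever' is in-vocabulary here
})
print(corrector.correct("sever", "tech"))   # -> 'server' (tech dictionary wins)
print(corrector.correct("sever", "food"))   # -> 'sever'  (valid in this category)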

2011

Building a Formal Arabic Ontology (Invited Paper)

Mustafa Jarrar
In proceedings of the Experts Meeting on Arabic Ontologies and Semantic Networks. Alecso, Arab League. Tunis, July 26-28, 2011.

Abstract:
Keywords: Arabic Ontology, Arabic Lexical Semantics, Arabic WordNet, WordNet, FrameNet, Gloss, Concept, Arabic Upper Level Ontology, Arabic Top Level Ontology, Arabic Core Ontology
@inproceedings{BFAO11,
author = {Mustafa Jarrar},
title = {Building a Formal Arabic Ontology (Invited Paper)},
booktitle={Proceedings of the Experts Meeting on Arabic Ontologies and Semantic Networks},
year = {2011},
publisher={Alecso, Arab League},
address={Tunis},
month={July},
url = {http://localhost/sina/publications/#BFAO11}
}

Ontology-based Data and Process Governance Framework – The Case of e-Government Interoperability in Palestine

Mustafa Jarrar (BZU), Anton Deik (MTIT) and Bilal Farraj (MTIT)
In pre-proceedings of the IFIP International Symposium on Data-Driven Process Discovery and Analysis (SIMPDA’11). Pages 83-98. ISBN 978-88-903120-2-1. Campione, Italy. June 30, 2011.

Abstract: The major challenge when integrating information systems in any domain such as e-Government is the challenge of interoperability. One can distinguish between three aspects of interoperability: technical, semantic, and organizational. The technical aspect has been widely tackled, especially after the ubiquity of internet technologies. The semantic and organizational aspects deal with sharing the same understanding (semantics) of exchanged information among all applications and services, in addition to modeling and re-engineering governmental processes to facilitate the process cooperation needed to provision seamless e-government services. In this paper, we present the case of the Palestinian Interoperability Framework ‘Zinnar’, a use case of applying ontologies in e-government (i.e., data and process governance) to tackle the issues of semantic and organizational interoperability. The followed methodology resulted in a success story within a very short time and has produced a framework that is intuitive, elegant, and easy to understand and implement.
Keywords: Interoperability, Data Integration, e-Government, Ontology, Data Governance, Process Governance, Business Process Modeling
@inproceedings{OF11,
author = {Mustafa Jarrar and Anton Deik and Bilal Farraj},
title = {Ontology-based Data and Process Governance Framework – The Case of e-Government Interoperability in Palestine},
booktitle={Pre-proceedings of the IFIP International Symposium on Data-Driven Process Discovery and Analysis (SIMPDA’11)},
year = {2011},
pages={83-98},
publisher={xxx},
address={Campione, Italy},
month={June},
ISBN={978-88-903120-2-1},
url = {http://localhost/sina/publications/#OF11}
}

Tools for Arabic People Names Processing and Retrieval: A Statistical Approach

Ali Salhi, Adnan Yahya
In proceedings of the Arabic Language Technologies International Conference (ALTIC 2011). Alexandria, Egypt, 10-11 October 2011.

Abstract: Arabic web content has been rapidly growing, generating a need for tools to overcome the many challenges of processing and retrieving Arabic content: challenges related to Arabic language processing, search, and query analysis. An important part of dealing with Arabic digital content is processing and analyzing Arabic people names. This paper reports on our work aimed at designing name pre-processing tools that are able to efficiently identify and process Arabic people names in queries and documents. We try to address challenges such as name gender detection, translation (Arabic to English), correction, auto-suggestion and extraction from text. All through, we employ a statistical approach based on data obtained from high school student name lists in Palestine and Birzeit University student name lists. Based on this information we constructed different types of databases of Arabic names and used them as the infrastructure for well-structured name tools capable of being integrated into existing web search engines and document processing systems. We have been experimenting with some of the developed tools in our online application process at Birzeit University, with encouraging preliminary results.
Keywords: Arabic Proper Names, Statistical Databases, Name Correction, Name Translation, Names Gender Detection, Proper Names, Extraction, Natural Language Processing
@inproceedings{TANP11,
author = {Ali Salhi and Adnan Yahya},
title = {Tools for Arabic People Names Processing and Retrieval: A Statistical Approach},
booktitle={Proceedings of the Arabic Language Technologies International Conference},
year = {2011},
publisher={Arabic Language Technology Center (ALTEC)},
address={Alexandria, Egypt},
month={October},
url = {http://localhost/sina/publications/#TANP11}
}
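
A minimal sketch, assuming hypothetical name-frequency data, of the statistical gender detection the abstract mentions: a first name is classified by which gender list it occurs in more often, with a ratio threshold for ambiguous names.

from collections import Counter

def build_gender_db(male_names, female_names):
    return {"m": Counter(male_names), "f": Counter(female_names)}

def detect_gender(first_name, db, min_ratio=2.0):
    """Return 'm', 'f', 'ambiguous', or 'unknown' from relative corpus frequency."""
    m, f = db["m"][first_name], db["f"][first_name]
    if m == 0 and f == 0:
        return "unknown"
    if m > f and m >= min_ratio * max(f, 1):
        return "m"
    if f > m and f >= min_ratio * max(m, 1):
        return "f"
    return "ambiguous"   # names used for both genders

# Toy usage with transliterated names and invented counts:
db = build_gender_db(
    male_names=["ahmad"] * 90 + ["noor"] * 10,
    female_names=["fatima"] * 80 + ["noor"] * 30,
)
print(detect_gender("ahmad", db))   # -> 'm'
print(detect_gender("noor", db))    # -> 'f' (30 vs 10 passes the ratio test)
print(detect_gender("zzz", db))     # -> 'unknown'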

Enhancement Tools for Arabic Web Search : A Statistical Approach

Adnan Yahya, Ali Salhi
In proceedings of the 7th International Conference on Innovations in Information Technology. Abu Dhabi, United Arab Emirates, 25-27 April 2011.

Abstract: The Arabic web content is growing rapidly, and the need for its efficient management is gaining importance; the morphological complexity of Arabic raises many challenges in this regard. This paper reports on some of our work aimed at designing text mining and query pre-processing tools that are able to efficiently process and search large quantities of Arabic web data. In our research we try to address the challenges Arabic poses for natural language processing (NLP) and information retrieval: root extraction, language detection, and Arabic query correction, suggestion and expansion. While not reported in detail here, we are also developing tools for automatic Arabic document categorization. All through, we employ a statistical/corpus-based approach based on data obtained from a variety of sources. Based on corpus statistics we constructed databases of words and their frequencies as single, double and triple expressions and used that as the infrastructure for well-structured search aid tools that are able to handle the sophisticated nature of Arabic, and are capable of being integrated into existing web search engines and document processing systems. We also utilize context analysis and spellchecking of the user queries to enable a more complete and efficient search. While the results reported here are promising, they must be viewed as work in progress, still in need of testing, refining, integration and deployment in real-life settings.
Keywords: Arabic Proper Names, Statistical Databases, Name Correction, Name Translation, Names Gender Detection, Proper Names, Extraction, Natural Language Processing
@inproceedings{ETWS11,
author = {Adnan Yahya and Ali Salhi},
title = {Enhancement Tools for Arabic Web Search: A Statistical Approach},
booktitle={Proceedings of the 7th International Conference on Innovations in Information Technology},
year = {2011},
publisher={IEEE},
address={xxx},
month={April},
url = {http://localhost/sina/publications/#ETWS11}
}
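
A minimal sketch of next-word query suggestion from bigram ("double expression") frequencies, one of the search aids built on the corpus statistics the abstract describes; the corpus and query are invented:

from collections import Counter, defaultdict

def build_bigram_index(corpus_tokens):
    index = defaultdict(Counter)
    for w1, w2 in zip(corpus_tokens, corpus_tokens[1:]):
        index[w1][w2] += 1
    return index

def suggest(query, index, k=3):
    """Suggest the k most frequent continuations of the query's last word."""
    last = query.split()[-1]
    return [w for w, _ in index[last].most_common(k)]

tokens = "free web search free web hosting free web mail web search".split()
index = build_bigram_index(tokens)
print(suggest("best free web", index))   # -> ['search', 'hosting', 'mail']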

Guest Editorial: Knowledge Management and e-Human Resources Practices for Innovation

Gianluca Elia, Mustafa Jarrar
The International Journal of Knowledge and Learning (IJKL). Inderscience Publishers. (To Appear).

Abstract:
Keywords: Knowledge Management, HR, KM, Profile Mapping, Competency, Competencies Analysis, Competence Ontologies, Interoperability, Roadmaps for Fostering Creativity, Serendipity, Profiling Innovation, Complementarity, Learning, e-Human Resources.
@article{KM11,
author = {Gianluca Elia and Mustafa Jarrar},
title = {Guest Editorial: Knowledge Management and e-Human Resources Practices for Innovation},
journal={The International Journal of Knowledge and Learning},
year = {2011},
month={xxx},
publisher={Inderscience Publishers},
volume={xxx},
ISSN = {xxx},
url = {http://localhost/sina/publications/#KM11}
}

Guest Editorial: Querying the Data Web – Novel Techniques for Querying Structured Data on the Web

Paolo Ceravolo, Chengfei Liu, Mustafa Jarrar, Kai-Uwe Sattler
The World Wide Web Journal. Springer. ISSN:1573-1413. (To appear).

Abstract:
Keywords: Linked-Data, Semantic Web, Data Web, Linked Data, Web 3.0, Web 2.0, RDF, SPARQL, Languages, Query-by-Diagram, Mashups, Query Pipelines, Human Factors, Design, Management.
@article{SI11,
author = {Paolo Ceravolo and Chengfei Liu and Mustafa Jarrar and Kai-Uwe Sattler},
title = {Guest Editorial: Querying the Data Web – Novel Techniques for Querying Structured Data on the Web},
journal={The World Wide Web Journal},
year = {2011},
month={xxx},
publisher={Springer},
volume={xxx},
ISSN = {1573-1413},
url = {http://localhost/sina/publications/#SI11}
}

2010

A Query Formulation Language for the Data Web

Mustafa Jarrar, Marios D. Dikaiakos
IEEE Transactions on Knowledge and Data Engineering. IEEE Computer Society. (2010, In Press).

Abstract: We present a query formulation language (called MashQL) in order to easily query and fuse structured data on the web. The main novelty of MashQL is that it allows people with limited IT-skills to explore and query one (or multiple) data sources without prior knowledge about the schema, structure, vocabulary, or any technical details of these sources. More importantly, to be robust and cover most cases in practice, we do not assume that a data source should have -an offline or inline- schema. This poses several language-design and performance complexities that we fundamentally tackle. To illustrate the query formulation power of MashQL, and without loss of generality, we chose the Data Web scenario. We also chose querying RDF, as it is the most primitive data model; hence, MashQL can be similarly used for querying relational databases and XML. We present two implementations of MashQL, an online mashup editor, and a Firefox add-on. The former illustrates how MashQL can be used to query and mash up the Data Web as simply as filtering and piping web feeds; and the Firefox add-on illustrates using the browser as a web composer rather than only a navigator. To end, we evaluate MashQL on querying two datasets, DBLP and DBPedia, and show that our indexing techniques allow instant user-interaction.
Keywords: Query Formulation, Semantic Web, Data Web, Linked Data, RDF, SPARQL, Indexing Methods, Query Optimization, Mashups, Web 3.0, Web 2.0, Languages, Query-by-Diagram, Query Pipelines, Human Factors, Design, Management.
@article{QF10,
author = {Mustafa Jarrar and Marios D. Dikaiakos},
title = {A Query Formulation Language for the Data Web},
journal={IEEE Transactions on Knowledge and Data Engineering},
year = {2010},
month={xxx},
publisher={IEEE Computer Society},
volume={xxx},
ISSN={xxx},
url = {http://localhost/sina/publications/#QF10}
}
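
A minimal sketch of the schema-free exploration idea behind MashQL: the editor learns what a user can ask next by querying the data itself rather than a schema. The triples, names, and the two helper functions are invented for illustration; this is not the MashQL implementation:

triples = [
    ("ex:alice", "ex:worksAt",   "ex:acme"),
    ("ex:alice", "ex:knows",     "ex:bob"),
    ("ex:bob",   "ex:worksAt",   "ex:globex"),
    ("ex:acme",  "ex:locatedIn", "ex:london"),
]

def next_properties(subjects):
    """Properties that can extend the query, given the current subject set."""
    return sorted({p for s, p, o in triples if s in subjects})

def filter_subjects(prop, obj):
    """Subjects kept by a (property, object) restriction."""
    return {s for s, p, o in triples if p == prop and o == obj}

# Building a query step by step, as a MashQL user would via dropdowns:
current = {s for s, _, _ in triples}                 # start from everything
print(next_properties(current))                      # what can be asked at all
current = filter_subjects("ex:worksAt", "ex:acme")   # restrict: worksAt acme
print(next_properties(current))                      # -> ['ex:knows', 'ex:worksAt']
print(current)                                       # -> {'ex:alice'}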

Towards Query Optimization for the Data Web – Disk-based algorithms: Trace Equivalence and Bisimilarity

Ala’ Hawash, Anton Deik, Bilal Farraj, Mustafa Jarrar
In Proceedings of the International Conference on Intelligent Semantic Web – Services and Applications (ISWSA 2010). Amman, Jordan. Pages 131-137. ISSN: 2218-1504. June 2010.

Abstract: Companies, communities, research labs, and even governments are all competing on publishing structured data on the web in many forms such as RDF and XML. Many datasets are now being published and linked together, including Wikipedia, Yago, DBLP, IEEE, IBM, Flickr, and US and UK government data. Most of these datasets are published in RDF, which is a graph-based data model. However, querying RDF graphs is a major problem that has attracted the attention of the research community. Among the many approaches proposed to tune up the performance of queries over data graphs, a number propose to summarize RDF graphs for query optimization: instead of querying a dataset, queries are executed over the summary of the dataset. In order to summarize a dataset, two well-known algorithms are used, namely Trace Equivalence and Bisimilarity. Nevertheless, these are memory-based and thus suffer from scalability problems because of the limitations imposed by the memory. In this paper, we propose disk-based versions of those memory-based algorithms and adapt them to RDF data. Our proposed algorithms are evaluated on relatively large datasets and with different sizes of memory to demonstrate that they are indeed disk-based.
Keywords: Semantic/Data Web, WEB 3.0, RDF, Query Optimization, Scalability, Trace Equivalence, Bisimilarity.
@inproceedings{TQ10,
author = {Ala’ Hawash and Anton Deik and Bilal Farraj and Mustafa Jarrar},
title = {Towards Query Optimization for the Data Web – Disk-based algorithms: Trace Equivalence and Bisimilarity},
booktitle = {ISWSA 2010 Conference},
year = {2010},
pages = {131-137},
publisher = {Isra University},
address = {Amman, Jordan},
month = {June},
ISSN = {2218-1504},
url = {http://localhost/sina/publications/#TQ10}
}
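
A minimal in-memory sketch of bisimulation-based summarization by partition refinement (the paper's contribution is making this disk-based, which the sketch does not attempt; the graph is invented). Two nodes stay in one block only if, per edge label, they reach the same set of blocks:

edges = [   # (source, label, target) RDF-style edges
    ("a1", "type", "Author"), ("a2", "type", "Author"),
    ("a1", "wrote", "p1"),    ("a2", "wrote", "p2"),
    ("p1", "type", "Paper"),  ("p2", "type", "Paper"),
]
nodes = {n for s, _, t in edges for n in (s, t)}

def bisimulation_blocks(nodes, edges):
    block = dict.fromkeys(nodes, 0)          # start: one block for everything
    while True:
        # Signature of a node: the (label, target-block) pairs it can reach.
        sig = {n: frozenset((l, block[t]) for s, l, t in edges if s == n)
               for n in nodes}
        ids, new_block = {}, {}
        for n in sorted(nodes):              # re-block by (old block, signature)
            new_block[n] = ids.setdefault((block[n], sig[n]), len(ids))
        if new_block == block:               # partition is stable: done
            return block
        block = new_block

print(bisimulation_blocks(nodes, edges))
# a1/a2 collapse into one summary node and p1/p2 into another; the leaves
# Author and Paper share a block since neither has outgoing edges.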

Mapping ORM into OWL2

Rami Hodrob, Mustafa Jarrar
In Proceedings of the International Conference on Intelligent Semantic Web – Services and Applications (ISWSA 2010). Amman, Jordan. Pages 68-73. ISSN: 2218-1504. June 2010.

Abstract: The goal of this article is to map between Object Role Modeling (ORM) and the Web Ontology Language 2 (OWL 2 DL). This mapping allows one to graphically develop an ontology using the ORM notation, while the ORM model is automatically translated into OWL 2 DL. We map the most commonly used ORM rules into OWL 2 DL while preserving decidability. DogmaModeler is extended to perform this mapping (ORM into OWL 2 DL) automatically. The mapping technique is assessed using a reasoning methodology based on the RacerPro2 reasoner.
Keywords: Ontology, Object Role Modeling, Web Ontology Language 2 (OWL 2 DL), SHOIN Description Logic.
@inproceedings{MO10,
author = {Rami Hodrob and Mustafa Jarrar},
title = {Mapping ORM into OWL2},
booktitle = {ISWSA 2010 Conference},
year = {2010},
pages = {68-73},
publisher = {Isra University},
address = {Amman, Jordan},
month = {June},
ISSN = {2218-1504},
url = {http://localhost/sina/publications/#MO10}
}
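
A minimal sketch of the flavor of this ORM-to-OWL 2 mapping, emitting OWL 2 functional-style axioms for three common ORM constraints. The ORM model and the exact axiom choices are illustrative assumptions, not DogmaModeler's actual rule set:

def subtype_axiom(sub, sup):
    # ORM subtype arrow: every Sub is a Sup.
    return f"SubClassOf( :{sub} :{sup} )"

def mandatory_role_axiom(cls, role, target):
    # ORM mandatory role: every cls instance plays `role` with some target.
    return f"SubClassOf( :{cls} ObjectSomeValuesFrom( :{role} :{target} ) )"

def uniqueness_axiom(role):
    # ORM uniqueness constraint on a binary role: at most one target.
    return f"FunctionalObjectProperty( :{role} )"

orm_model = [
    ("subtype", "Student", "Person"),
    ("mandatory", "Student", "enrolledIn", "University"),
    ("unique", "enrolledIn"),
]
emitters = {"subtype": subtype_axiom,
            "mandatory": mandatory_role_axiom,
            "unique": uniqueness_axiom}
for kind, *args in orm_model:
    print(emitters[kind](*args))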

Querying the Data Web – the MashQL Approach

Mustafa Jarrar, Marios D. Dikaiakos
IEEE Internet Computing. Volume 14, No. 3. Pages 58-67. IEEE Computer Society, ISSN 1089-7801. May 2010.

Abstract: MashQL, a novel query formulation language for querying and mashing up structured data on the Web, doesn’t require users to know the queried data’s structure or the data itself to adhere to a schema. In this article, the authors address MashQL’s challenges as a language (as opposed to an interface) in assuming data to be schema-free. In particular, they propose and evaluate a novel technique for optimizing queries over large data sets to allow instant user interaction.
Keywords: Query Formulation, Semantic Web, Data Web, Linked Data, RDF, SPARQL, Indexing Methods, Query Optimization, Mashups, Web 3.0, Web 2.0, Languages, Query-by-Diagram, Query Pipelines, Human Factors, Design, Management.
@article{QT10,
author = {Mustafa Jarrar and Marios D. Dikaiakos},
title = {Querying the Data Web – the MashQL Approach},
journal = {IEEE Internet Computing},
year = {2010},
publisher={IEEE Computer Society},
volume = {14},
number = {3},
pages = {58-67},
month = {May},
ISSN = {1089-7801},
url = {http://localhost/sina/publications/#QT10}
}

Towards a Methodology for Building Ontologies – Classify by Properties (in Arabic)

Jamal Daher, Mustafa Jarrar
In proceedings of the 3rd Palestinian International Conference on Computer and Information Technology (PICCIT 2010). Hebron, Palestine. March 2010.

Abstract: The Internet and the open connectivity environment created a strong demand for the sharing of data semantics. Ontologies are increasingly becoming essential for computer science applications. Organizations are beginning to view them as useful machine-processable semantics for many application areas, such as e-commerce, bioinformatics, and software and data engineering. An ontology is a shared understanding of a certain domain, represented formally in a computer resource. By sharing an ontology, autonomous and distributed applications can meaningfully communicate to exchange data and make transactions interoperate independently of their internal technologies. The meaning in an ontology is typically specified through the sub- and super-types of a certain concept/class. As we discuss and explain in this article, this way of specifying the meaning of a certain thing (using sub/super types) is indeed difficult and complex. This is because one has to investigate whether these sub/super types are true in reality, not only for the applications at hand. In this article we propose a new methodology for specifying the meaning of a certain thing: instead of classifying arbitrarily, we propose to use properties as classifiers. We claim that our methodology is easier to use and that it leads to more ontological consistency.
@inproceedings{NM10,
author = {Jamal Daher and Mustafa Jarrar},
title = {Towards a Methodology for Building Ontologies – Classify by Properties (in Arabic)},
year = {2010},
address = {Hebron, Palestine},
month = {March},
url = {http://localhost/sina/publications/#NM10}
}
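
A toy illustration (not the paper's formal methodology) of the classify-by-properties idea: each class is characterized by the set of properties its instances must carry, and subsumption is then derived from property-set inclusion instead of being asserted arbitrarily:

classes = {
    "Person":   {"hasName"},
    "Student":  {"hasName", "enrolledIn"},
    "Employee": {"hasName", "worksAt"},
}

def subsumers(cls):
    """Classes subsuming `cls`: those whose property set `cls` includes."""
    return [c for c, props in classes.items()
            if c != cls and props <= classes[cls]]

print(subsumers("Student"))    # -> ['Person']  (derived, not asserted)
print(subsumers("Employee"))   # -> ['Person']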

Towards Query Optimization for the Data Web

Anton Deik, Bilal Farraj, Ala’ Hawash, Mustafa Jarrar
In proceedings of the 3rd Palestinian International Conference on Computer and Information Technology (PICCIT 2010). Hebron, Palestine. March 2010.

Abstract: The amount of structured data in the web is growing rapidly. For example, Google, Yahoo, Freebase, Upcoming, eBay, Flickr, and many more are competing on offering their data in a structured format such as XML, RDF, and CSV. To exploit the full potential of this massive amount of structured data, MashQL was introduced as a query formulation language for the data web and specifically for RDF data sources. Querying large RDF datasets is indeed of high complexity. Not only MashQL, but several other approaches have been proposed to tune up the performance of queries over large RDF datasets. Many of these approaches propose to summarize large RDF datasets for query optimization purposes. In order to summarize a dataset (which is typically represented as a graph), two well known algorithms are being used, namely, Trace Equivalence and Bisimilarity. In other words, query optimization over large graph datasets is being done by summarizing large data graphs using those algorithms. The idea of optimization here is that instead of querying a dataset, queries are executed over the summary of this dataset. Nevertheless, these algorithms are memory based; that is, datasets are loaded to memory and the algorithms are executed there. As a consequence, they suffer from scalability problems because of the limitations imposed by the memory. In this paper, we propose two disk-based versions of those two memory-based algorithms. Our proposed algorithms are experimented on relatively large datasets, and on different sizes of memory.
Keywords: Semantic\Data Web, WEB 3.0, MashQL, RDF, Query Optimization, Trace Equivalence, Bisimilarity.
@inproceedings{TQO10,
author = {Anton Deik and Bilal Farraj and Ala’ Hawash and Mustafa Jarrar},
title = {Towards Query Optimization for the Data Web},
year = {2010},
address = {Hebron, Palestine},
month = {March},
url = {http://localhost/sina/publications/#TQO10}
}

2005

Automated Reasoning, Knowledge Representation and Management

Peter Baumgartner, Ulrich Furbach, and Adnan H. Yahya
German Journal of Artificial Intelligence – Künstliche Intelligenz (KI), 1:5-11, 2005.

Abstract: This overview discusses the connection between two subdisciplines of AI, Automated Reasoning and Knowledge Representation. The state of the art in Automated Reasoning is briefly indicated and its relation to Logic Programming and Knowledge Representation is presented. The issue of Knowledge Management is addressed via a case study.
@article{AR05,
author = {Peter Baumgartner and Ulrich Furbach and Adnan H. Yahya},
title = {Automated Reasoning, Knowledge Representation and Management},
journal = {KI – Künstliche Intelligenz},
year = {2005},
volume = {1},
pages = {5-11},
ISSN = {0933-1875},
url = {http://localhost/sina/publications/#AR05}
}

2003

A Relevance Restriction Strategy for Automated Deduction

David Plaisted and Adnan Yahya
Journal of Artificial Intelligence (AI), Volume 144, Issues 1-2, Pages 59-93 (2003).

Abstract: Identifying relevant clauses before attempting a proof may lead to more efficient automated theorem proving. Relevance is here defined relative to a given set of clauses S and one or more distinguished sets of support T. The role of a set of support T can be played by the negation of the theorem to be proved or the query to be answered in S which gives the refutation search goal orientation. The concept of relevance distance between two clauses C and D of S is defined using various metrics based on the properties of paths connecting C to D. This concept is extended to define relevance distance between a clause and a set (or multiple sets) of support. Informally, the relevance distance reflects how closely two clauses are related. The relevance distance to one or more support sets is used to compute a relevance set R, a subset of S that is unsatisfiable if and only if S is unsatisfiable. R is computed as the set of clauses of S at distance less than n from one or more support sets; if n is sufficiently large then R is unsatisfiable if S is. If R is much smaller than S, a refutation from R may be obtainable in much less time than a refutation from S. R must be efficiently computable to achieve an overall efficiency improvement. Different relevance metrics are defined, characterized and related. The tradeoffs between the amount of effort invested in computing a relevance set and the resulting gains in finding a refutation are addressed. Relevance sets may be utilized with arbitrary complete theorem proving strategies in a completeness-preserving manner. The potential of the advanced relevance techniques for various applications of theorem proving is discussed.
Keywords: Relevance, Relevance metrics, Theorem proving, Sorted inference
@article{RS03,
author = {David Plaisted and Adnan Yahya},
title = {A Relevance Restriction Strategy for Automated Deduction},
journal = {Journal of Artificial Intelligence},
year = {2003},
volume = {144},
number = {1-2},
pages = {59-93},
month = {March},
url = {http://localhost/sina/publications/#RS03}
}
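
A minimal sketch of computing a relevance set by breadth-first search over clause connections. The "clauses are linked when they contain complementary literals" metric and the toy propositional clause set are simplifying assumptions standing in for the paper's metrics:

from collections import deque

def connected(c1, c2):
    """Two clauses are linked if one contains a literal the other negates."""
    return any(("-" + lit in c2) or (lit.startswith("-") and lit[1:] in c2)
               for lit in c1)

def relevance_set(clauses, support, n):
    """Indices of all clauses at distance <= n from a support clause (BFS)."""
    dist = {i: 0 for i in support}
    queue = deque(support)
    while queue:
        i = queue.popleft()
        if dist[i] == n:          # do not expand past the distance bound
            continue
        for j, cj in enumerate(clauses):
            if j not in dist and connected(clauses[i], cj):
                dist[j] = dist[i] + 1
                queue.append(j)
    return sorted(dist)

clauses = [
    {"-p", "q"},   # 0: p -> q
    {"-q", "r"},   # 1: q -> r
    {"p"},         # 2: fact p
    {"-r"},        # 3: negated goal, the support set
    {"s", "-t"},   # 4: unrelated clause, never becomes relevant
]
print(relevance_set(clauses, support=[3], n=3))   # -> [0, 1, 2, 3]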

SATCHMOREBID: SATCHMO(RE) with BIDirectional Relevancy

Donald Loveland and Adnan Yahya
Journal of New Generation Computing. Volume 21, Number 3. Pages 177-207 (2003).

Abstract: SATCHMORE was introduced as a mechanism to integrate relevancy testing with the model-generation theorem prover SATCHMO. This made it possible to avoid invoking some clauses that appear in no refutation, which was a major drawback of the SATCHMO approach. SATCHMORE relevancy, however, is driven by the entire set of negative clauses and no distinction is accorded to the query negation. Under unfavorable circumstances, such as in the presence of large amounts of negative data, this can reduce the efficiency of SATCHMORE. In this paper we introduce a further refinement of SATCHMO called SATCHMOREBID: SATCHMORE with BIDirectional relevancy. SATCHMOREBID uses only the negation of the query for relevancy determination at the start. Other negative clauses are introduced on demand and only if a refutation is not possible using the current set of negative clauses. The search for the relevant negative clauses is performed in a forward chaining mode as opposed to relevancy propagation in SATCHMORE which is based on backward chaining. SATCHMOREBID is shown to be refutationally sound and complete. Experiments on a prototype SATCHMOREBID implementation point to its potential to enhance the efficiency of the query answering process in disjunctive databases.
Keywords: Disjunctive Deductive Databases, Query Answering, Bidirectional Search, Model Generation Theorem Proving, Relevancy
@article{RE03,
author = {Donald Loveland and Adnan Yahya},
title = {SATCHMOREBID: SATCHMO(RE) with BIDirectional Relevancy},
journal = {Journal of New Generation Computing},
year = {2003},
volume = {21},
number = {3},
pages = {177-207},
url = {http://localhost/sina/publications/#RE03}
}

2002

Ordered Semantic Hyper-Tableaux

Adnan Yahya and David Plaisted
Journal of Automated Reasoning (JAR), 29(1). Pages 17-57 (2002).

Abstract: A family of tableau methods, called ordered semantic hyper (OSH) tableau methods for first-order theories with function symbols, is presented. These methods permit semantic information to guide the search for a proof. They also may make use of orderings on literals, clauses, and interpretations to guide the search. In a typical tableau, the branches represent conjunctions of literals, and the tableau represents the disjunction of the branches. An OSH tableau is as usual except that each branch B has an interpretation I0[B] associated with it, where I0 is an interpretation supplied at the beginning and I0[B] is the interpretation most like I0 that satisfies B. Only clauses that I0[B] falsifies may be used to expand the branch B, thus restricting the kinds of tableau that can be constructed. This restriction guarantees the goal sensitivity of these methods if I0 is properly chosen. Certain choices of I0 may produce a purely bottom-up tableau construction, while others may result in goal-oriented evaluation for a given query. The choices of which branch is selected for expansion and which clause is used to expand this branch are examined and their effects on the OSH tableau methods considered. A branch reordering method is also studied, as well as a branch pruning technique called complement modification, that adds additional literals to branches in a soundness-preserving manner. All members of the family of OSH tableaux are shown to be sound, complete, and proof convergent for refutations. Proof convergence means that any allowable sequence of operations will eventually find a proof, if one exists. OSH tableaux are powerful enough to be treated as a generalization of several classes of tableau discussed in the literature, including forward chaining and backward chaining procedures. Therefore, they can be used for efficient query processing.
Keywords: tableaux – semantics – automated theorem proving – literal ordering
@article{OS02,
author = {Adnan Yahya and David Plaisted},
title = {Ordered Semantic Hyper-Tableaux},
journal = {Journal of Automated Reasoning},
year = {2002},
volume = {29},
number = {1},
pages = {17-57},
url = {http://localhost/sina/publications/#OS02}
}
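
A minimal propositional sketch of the semantic guidance described above: computing I0[B] from an initial interpretation I0 and a branch B, then keeping only the clauses I0[B] falsifies as candidates for expanding B. The interpretation and clauses are invented:

def interp_of_branch(i0, branch):
    """I0[B]: the truth assignment forced by B, defaulting to I0 off-branch."""
    i = dict(i0)
    for lit in branch:
        i[lit.lstrip("-")] = not lit.startswith("-")
    return i

def falsified(clause, interp):
    """A clause (a set of literals) is falsified if none of its literals is true."""
    return not any(interp[l.lstrip("-")] != l.startswith("-") for l in clause)

i0 = {"p": False, "q": False, "r": True}   # the initial interpretation I0
branch = ["p"]                             # the current branch asserts p
clauses = [{"-p", "q"}, {"-q", "r"}, {"r"}]
ib = interp_of_branch(i0, branch)          # I0[B]: p=True, q=False, r=True
usable = [c for c in clauses if falsified(c, ib)]
print(usable)   # only {-p, q} is falsified, so only it may expand the branch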

2000

Minimal Model Generation for Refined Answering of Generalized Queries in Disjunctive Deductive Databases

Adnan H. Yahya
Journal of Data and Knowledge Engineering. Volume 34, Issue 3, September 2000. Pages 219-249.

Abstract: Generalized queries are defined as sets of clauses in implication form. They cover several tasks of practical importance for database maintenance such as answering positive queries, computing database completions and integrity constraints checking. We address the issue of answering generalized queries under the minimal model semantics for the class of disjunctive deductive databases (DDDBs). The advanced approach is based on having the query induce an order on the models returned by a sound and complete minimal model generating procedure. We consider answers that are true in all and those that are true in some minimal models of the theory. We address the issue of answering positive queries through the construction of the minimal model state of the DDDB, using a minimal model generating procedure. The refinements allowed by the procedure include isolating a minimal component of a disjunctive answer, the specification of possible updates to the theory to enable the derivability of certain queries and deciding the monotonicity properties of answers to different classes of queries.
Keywords: Deductive databases; Minimal model generation; Query answering; Integrity constraints; Nonmonotonic reasoning
@article{MM00,
author = {Adnan H. Yahya},
title = {Minimal Model Generation for Refined Answering of Generalized Queries in Disjunctive Deductive Databases},
journal = {Journal of Data and Knowledge Engineering},
year = {2000},
volume = {34},
number = {3},
pages = {219-249},
month = {September},
url = {http://localhost/sina/publications/#MM00}
}

Positive Unit Hyper-Resolution Tableaux for Minimal Model Generation

Francois Bry and Adnan Yahya
Journal of Automated Reasoning (JAR), 25(1): 35-82, July 2000.

Abstract: Minimal Herbrand models of sets of first-order clauses are useful in several areas of computer science, e.g. automated theorem proving, program verification, logic programming, databases, and artificial intelligence. In most cases, the conventional model generation algorithms are inappropriate because they generate nonminimal Herbrand models and can be inefficient. This article describes an approach for generating the minimal Herbrand models of sets of first-order clauses. The approach builds upon positive unit hyperresolution (PUHR) tableaux, that are in general smaller than conventional tableaux. PUHR tableaux formalize the approach initially introduced with the theorem prover SATCHMO. Two minimal model generation procedures are described. The first one expands PUHR tableaux depth-first relying on a complement splitting expansion rule and on a form of backtracking involving constraints. A Prolog implementation, named MM-SATCHMO, of this procedure is given and its performance on benchmark suites is reported. The second minimal model generation procedure performs a breadth-first, constrained expansion of PUHR (complement) tableaux. Both procedures are optimal in the sense that each minimal model is constructed only once, and the construction of nonminimal models is interrupted as soon as possible. They are complete in the following sense: The depth-first minimal model generation procedure computes all minimal Herbrand models of the considered clauses provided these models are all finite. The breadth-first minimal model generation procedure computes all finite minimal Herbrand models of the set of clauses under consideration. The proposed procedures are compared with related work in terms of both principles and performance on benchmark problems.
@article{PU00,
author = {Francois Bry and Adnan Yahya},
title = {Positive Unit Hyper-Resolution Tableaux for Minimal Model Generation},
journal = {Journal of Automated Reasoning},
year = {2000},
volume = {25},
number = {1},
pages = {35-82},
month = {July},
url = {http://localhost/sina/publications/#PU00}
}
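
A minimal propositional sketch of SATCHMO-style model generation underlying the PUHR idea: a branch is expanded by splitting on the head of a violated clause, and every saturated branch is a model. The paper's complement-splitting rule avoids generating nonminimal models in the first place; this sketch substitutes a brute-force minimality filter:

def models(clauses, branch=frozenset()):
    """Clauses are (body, head) pairs of atom sets: body -> head-disjunction."""
    for body, head in clauses:
        if body <= branch and not (head & branch):   # a violated clause
            result = []
            for atom in head:                        # split on head disjuncts
                result += models(clauses, branch | {atom})
            return result                            # empty head closes the branch
    return [branch]                                  # nothing violated: a model

def minimal(ms):
    return [m for m in ms if not any(o < m for o in ms)]

clauses = [(set(), {"a", "b"}), ({"a"}, {"b"})]      # true -> a|b,  a -> b
ms = models(clauses)
print(sorted(map(sorted, ms)))            # [['a', 'b'], ['b']]
print(sorted(map(sorted, minimal(ms))))   # [['b']]: the nonminimal model drops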

Model Generation for Disjunctive Deductive Databases

Adnan Yahya
Fourth World Multiconference on Systemics, Cybernetics and Informatics. Orlando, Florida. July 2000.

Abstract:
Keywords:
@inproceedings{MG00,
author = {Adnan Yahya},
title = {Model Generation for Disjunctive Deductive Databases},
year = {2000},
address = {Orlando, Florida},
month = {July},
url = {http://localhost/sina/publications/#MG00}
}

1997

Updates in Disjunctive Deductive Databases: A Minimal Model Based Approach

Adnan H. Yahya
In Proceedings of the Workshop on Logic Programming and Knowledge Representation (LPKR’97), held in conjunction with the International Logic Programming Symposium 1997 (ILPS’97). Port Jefferson, Long Island, N.Y., October 12-16, 1997.

Abstract: The issue of updates in Disjunctive Deductive Databases (DDDBs) under the minimal model semantics is addressed. We consider ground clause addition and deletion in a DDDB. The approach of this paper is based on manipulating the clauses of the theory to produce the required change to the minimal model structure necessary to achieve the clause addition/deletion update. First we deal with ground positive clause updates in ground DDDBs. Later we consider positive, then general, clause addition/deletion in the class of range restricted DDDBs. When we give more than one algorithm for a case we comment on the comparative merits and limitations of each. We use the freedom offered by the multiple possibilities for achieving an update to select the one with the least change to the minimal model structure of the theory. We argue that such minimality is desirable if one interprets the minimal model structure as representing the possible states of the modeled world and therefore an update must affect them minimally.
Keywords: Disjunctive Deductive Databases (DDDBs), Database Updates, Minimal Model Semantics, Minimal Model Generation
@inproceedings{MM97,
author = {Adnan H. Yahya},
title = {Updates in Disjunctive Deductive Databases: A Minimal Model Based Approach},
year = {1997},
address = {Long Island N.Y.},
month = {October},
url = {http://localhost/sina/publications/#MM97}
}

Generalized Query Answering in Disjunctive Deductive Databases: Procedural and Nonmonotonic Aspects

Adnan H. Yahya
Fourth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’97). Dagstuhl, Germany, July 28-31, 1997. Lecture Notes in Artificial Intelligence, Vol. 1265, Springer-Verlag, July 1997. Pages 324-340.

Abstract: Generalized queries are defined as sets of clauses in implication form. They cover several tasks of practical importance for database maintenance such as answering positive queries, computing database completions and integrity constraints checking. We address the issue of answering generalized queries under the minimal model semantics for the class of Disjunctive Deductive Databases (DDDBs). Our approach is based on having the query induce an order on the models returned by a sound and complete minimal model generating procedure. We consider answers that are true in all and those that are true in some minimal models of the theory and investigate the monotonicity properties of the different classes of queries and answers.
@inproceedings{GQ97,
author = {Adnan H. Yahya},
title = {Generalized Query Answering in Disjunctive Deductive Databases: Procedural and Nonmonotonic Aspects},
booktitle = {Logic Programming and Nonmonotonic Reasoning LPNMR’97},
year = {1997},
pages = {324-340},
publisher = {Springer},
address = {Dagstuhl, Germany},
month = {July},
ISBN = {3-540-63255-7},
url = {http://localhost/sina/publications/#GQ97}
}
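
A minimal sketch of the two answer notions the abstract distinguishes, evaluated over an already-computed (invented) set of minimal models: an answer is certain if true in all minimal models and possible if true in some:

minimal_models = [{"a", "c"}, {"b", "c"}]   # e.g. output of a minimal model generator

def certain(atom):    # true in ALL minimal models
    return all(atom in m for m in minimal_models)

def possible(atom):   # true in SOME minimal model
    return any(atom in m for m in minimal_models)

print(certain("c"), possible("c"))   # True True
print(certain("a"), possible("a"))   # False True
print(certain("d"), possible("d"))   # False False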

1996

Minimal Model Generation and Compilation (System Description)

Thomas Brüggemann, Francois Bry, Norbert Eisinger, Tim Geisler, Sven Panne, Heribert Schütz, Sunna Torge, Adnan Yahya

In Proceedings of JFPLC’96. Hermes. Clermont-Ferrand, France, June 1996.

Abstract: Satchmo is an automated theorem prover for first-order predicate logic implemented in Prolog. Its reasoning paradigm, model generation, is more powerful than the traditional refutation paradigm. It enabled the development of a novel and efficient technique to compute minimal Herbrand models, which prevents the generation of non-minimal models that would later have to be filtered out in a post-processing step. It also encouraged the development of several advanced efficiency enhancing techniques that result in a highly competitive performance on standard benchmark problems.
@inproceedings{MM96,
author = {Thomas Brüggemann and Francois Bry and Norbert Eisinger and Tim Geisler and Sven Panne and Heribert Schütz and Sunna Torge and Adnan Yahya},
title = {Minimal Model Generation and Compilation (System Description)},
booktitle = {JFPLC},
year = {1996},
publisher = {Hermes},
address = {Clermont-Ferrand, France},
month = {June},
ISBN = {2-86601-544-4},
url = {http://localhost/sina/publications/#MM96}
}

Minimal Model Generation with Positive Unit Hyper-Resolution Tableaux

Francois Bry and Adnan Yahya
In Proceedings of the 5th Workshop on Theorem Proving with Analytic Tableaux and Related Methods (TABLEAUX’96). Lecture Notes in Artificial Intelligence, Springer. Italy, May 1996. Pages 143-159.

Abstract: Herbrand models for clausal theories are useful in several areas of computer science. In most cases, however, the conventional model generation algorithms are inappropriate because they generate nonminimal Herbrand models and can be inefficient. This article describes a novel approach for generating minimal Herbrand models of clausal theories. The approach builds upon positive unit hyperresolution (PUHR) tableaux, that are in general smaller than conventional tableaux. To generate only minimal Herbrand models, a complement splitting expansion rule and a specific search strategy are applied. The proposed procedure is optimal in the sense that each minimal model is generated only once, and nonminimal models are rejected before their complete construction. First measurements on an implementation point to its efficiency.
@inproceedings{MMG96,
author = {Francois Bry and Adnan Yahya},
title = {Minimal Model Generation with Positive Unit Hyper-Resolution Tableaux},
booktitle = {TABLEAUX},
year = {1996},
pages = {143-159},
publisher = {Springer},
address = {Italy},
month = {May},
ISSN = {},
url = {http://localhost/sina/publications/#MMG96}
}

1995

Computing Perfect and Stable Models Using Ordered Model Trees

Jose Alberto Fernandez, Jack Minker and Adnan Yahya
Journal of Computational Intelligence, 11(1):89-112, February 1995.

Abstract: Ordered model trees were introduced as a normal form for disjunctive deductive databases. They were also used to facilitate the computation of minimal models for disjunctive theories by exploiting the order imposed on the Herbrand base of the theory. In this work we show how the order on the Herbrand base can be used to compute perfect models of a disjunctive stratified finite theory. We are able to compute the stable models of a general finite theory by combining the order on the elements of the Herbrand base with previous results showing that the stable models of a theory T can be computed as the perfect models of a corresponding disjunctive theory resulting from applying the so-called evidential transformation to T. While other methods consider many models that are rejected at the end, the use of atom ordering allows us to guarantee that every model generated belongs to the class of models being computed. As for negation-free databases, the ordered tree serves as the canonical representation of the database.
Keywords: Disjunctive Database, Model Tree, Ordered Model Tree, Perfect Model, Stable Model
@article{CP95,
author = {Jose Alberto Fernandez and Jack Minker and Adnan Yahya},
title = {Computing Perfect and Stable Models Using Ordered Model Trees},
journal = {Journal of Computational Intelligence},
year = {1995},
volume = {11},
number = {1},
pages = {89-112},
month = {February},
url = {http://localhost/sina/publications/#CP95}
}

1994

Query Evaluation in Partitioned Disjunctive Deductive Databases

Adnan Yahya and Jack Minker
International Journal of Intelligent and Cooperative Information Systems (IJCIS), 3(4):385-413, December 1994.

Abstract: Query evaluation in disjunctive deductive databases is in general computationally hard. The class of databases for which the process is tractable is severely limited. The complexity of the process depends on the structure of the database as well as on the type of query being evaluated. In this paper we study the issue of simplified query processing in disjunctive deductive databases. We address the possibility of evaluating general queries by independently processing their atomic components and describe the class of databases for which this approach is possible. We also discuss the issue of dividing a disjunctive deductive database into a set of disjoint components then answering queries and computing database completions by combining the results obtained against the individual components. Some practical special cases are considered. The methods developed in this paper can be utilized to introduce parallelism into the query evaluation process.
Keywords: Database Completion, Database Partitioning, Disjunctive Deductive Databases (DDDB), Model Trees, Query Evaluation
@article{QE94,
author = {Adnan Yahya and Jack Minker},
title = {Query Evaluation in Partitioned Disjunctive Deductive Databases},
journal = {International Journal of Intelligent and Cooperative Information Systems},
year = {1994},
volume = {3},
number = {4},
pages = {385-413},
month = {December},
url = {http://localhost/sina/publications/#QE94}
}

Ordered Model Trees: A Normal Form for Disjunctive Deductive Databases

Adnan Yahya, Jose Alberto Fernandez and Jack Minker
Journal of Automated Reasoning (JAR) , 13(1):117-143, August 1994.

Abstract: Model trees were conceived as a structure-sharing approach to represent information in disjunctive deductive databases. In this paper we introduce the concept of ordered minimal model trees as a normal form for disjunctive deductive databases. These are model trees in which an order is imposed on the elements of the Herbrand base. The properties of ordered minimal model trees are investigated as well as their possible utilization for efficient manipulation of disjunctive deductive databases. Algorithms are presented for constructing and performing operations on ordered model trees. The complexity of ordered model tree processing is addressed. Model forests are presented as an approach to reduce the complexity of ordered model tree construction and processing.
Keywords: Disjunctive Deductive Database, Ordered Model Trees, Indefinite Information
@article{OM94,
author = {Adnan Yahya and Jose Alberto Fernandez and Jack Minker},
title = {Ordered Model Trees: A Normal Form for Disjunctive Deductive Databases},
journal = {Journal of Automated Reasoning},
year = {1994},
volume = {13},
number = {1},
pages = {117-143},
month = {August},
url = {http://localhost/sina/publications/#OM94}
}

 

Reports

2011

Palestinian E-Government Needs Assessment: Skills Analyses and Training Program

Mustafa Jarrar (BZU), Majd Ashhab (BZU), Radwan Tahboub (PPU), Romain Robert (UoN) with contribution from Mahmoud Saheb (PPU), Ismail Romi (PPU), David Chadwick(TT), Mohammad Jubran (BZU)
Deliverables: D1.1, D2.1, D3.1. Pal-Gov Project (511159-TEMPUS-1-2010-1-PS-TEMPUS-JPHES), May 2011.

Abstract: This report identifies the skills needed to implement and deploy an e-government framework, focusing on the interoperability, security, and legal needs. The report also suggests a training program of six training tutorials, or alternatively, four academic courses, that are necessary to build these skills.
The methodology used to identify the missing skills was analytical. That is, the Palestinian e-government architecture was studied and analyzed and the skills and know-how needed to implement and deploy this were identified in consultation with the governmental and private sectors. We then estimated the present skills and know-how of the Palestinian governmental and private sectors. From the identified present skills and required skills, the missing skills were deduced. This allowed us to derive the Intended Learning Outcomes that are required for the suggested training program.

1996

A Goal-Driven Approach to Efficient Query Processing in Disjunctive Databases

Adnan Yahya
Technical Report Number PMS-FB-1996-12. Institut fuer Informatik, LMU-Muenchen, Germany. Presented at the Dagstuhl Seminar on Disjunctive Logic Programming and Databases: Nonmonotonic Aspects, 1-5 July, 1996, Dagstuhl, Germany, Report Number 150.

Abstract: Generally, proof procedures based on model generation perform bottom-up processing of clauses. Several algorithms for generating (minimal) models for disjunctive theories were advanced in the literature. Used for query answering, bottom-up procedures tend to explore a much larger search space than is strictly needed. On the other hand, top-down processing usually has a more focused search space which can result in more efficient query answering. In this paper we establish a strong connection between model generation and clause derivability that allows us to use a (minimal) model generating procedure for evaluating queries in a top-down fashion. In contrast to other methods our approach requires no extensive rewriting of the input theory and introduces no new predicates. Rather, it is based on a certain duality principle for interpreting logical connectives achieved by reversing the direction of implication connectives in the clauses representing both the theory and the negation of the query. The application of a generic (minimal) model generating procedure to the transformed clause set results in top-down query answering. We explain the reasoning behind the transformation and show how the duality approach can be utilized for refined query answering by specifying the conditions under which the query becomes derivable from the theory. Our initial testing points to a clear efficiency advantage of the advanced approach as compared to traditional bottom-up processing for the class of positive queries against a disjunctive database.
@techreport{GD96,
author = {Adnan Yahya},
title = {A Goal-Driven Approach to Efficient Query Processing in Disjunctive Databases},
institution = {Institut fuer Informatik},
year = {1996},
number = {150},
address = {Dagstuhl, Germany},
month = {July},
url = {http://localhost/sina/publications/#GD96}
}

 

Thesis

Under Development

Talks and Tutorials

2013

The Next Generation of the Web 3.0: The Semantic Web (Invited Talk)

Mustafa Jarrar (BZU)
TechCon #2, Peeks. Ramallah, Palestine, October 12, 2013.

 

Writing a Competitive Proposal: Highlights from the e-Government Academy Project (Keynote Speech)

Mustafa Jarrar (BZU)
Tempus Information day, Ramallah, Palestine, January 16, 2013.

 

2012

Europe-Palestine Research Cooperation – Ongoing Projects at Sina Institute at Birzeit University. (Keynote Speech)

Mustafa Jarrar (BZU)
eAGE’12 conference, Dubai, December 13, 2012.

 

Zinnar – The e-Government Interoperability Framework. (Keynote Speech)

Mustafa Jarrar (BZU)
The First National Conference on e-Governance & e-Services. Birzeit, Palestine. June 27, 2012

 

2011

Building A Formal Arabic Ontology (Invited Talk)

Mustafa Jarrar (BZU)
The Experts Meeting On Arabic Ontologies And Semantic Networks. Alecso, Arab League. Tunis, July 26-28, 2011

 

The Palestinian Government Ontology (Invited Talk)

Mustafa Jarrar (BZU)
The Experts Meeting On Arabic Ontologies And Semantic Networks. Alecso, Arab League. Tunis, July 26-28, 2011

 

Ontology-based Data Governance

Mustafa Jarrar (BZU)
eGovernance Workshop, at the Ministry of Telecommunication and Information Technology, Ramallah, Palestine. June 28, 2011.

 

Ontology-based Data and Process Governance Framework – The Case of e-Government Interoperability in Palestine (Research Article)

Mustafa Jarrar (BZU)
The IFIP International Symposium on Data-Driven Process Discovery and Analysis (SIMPDA’11). Campione, Italy. June 30, 2011.

 

Arabic Ontology Engineering – Research Challenges and Opportunities (Keynote Speech)

Mustafa Jarrar (BZU)
The International Conference on Intelligent Semantic Web – Services and Applications (ISWSA 2011). Amman, Jordan. April 18, 2011.

 

2010

eGovernment Interoperability Framework in Palestine (Invited Lecture)

Mustafa Jarrar (BZU)
The Service Oriented Architecture Course, Masters on eGovernment. University of Trento, Italy. December 2010.

 

Building an Arabic Ontology for Information Search and Integration (Invited Talk)

Mustafa Jarrar (BZU)
ExpoTech III, Birzeit University, Palestine. April 16, 2010

 

Web 3.0: The Third Generation of the Web (Invited Talk)

Mustafa Jarrar (BZU)
ExpoTech III, Birzeit University, Palestine. April 16, 2010

 

Mechanisms of Connecting IT Colleges in the Fields of Research and Teaching (Invited Speaker)

Mustafa Jarrar (BZU)
Workshop on Evaluation of IT curriculum in Palestinian universities. Arab American University, Palestine. March 28, 2010.

 

2009

Legal Ontology Engineering (30-hours Tutorial)

Mustafa Jarrar (BZU)
Institute of Law, Birzeit University, Palestine. 2009

 

Ontology-Based e-Governance (Invited Lecture)

Mustafa Jarrar (BZU)
E-government Course. Birzeit University, Palestine. December 17, 2009.

 

 

Tools

Under Development

Content and Services

Under Development