Search results for: chunk
Number of results: 1706
The present paper identifies the mistakes made by a data-driven Bengali chunker. Analysis of chunk-based machine translation output shows that the major classes of errors stem from verb-chunk identification mistakes. Therefore, based on an analysis of the types of mistakes in Bengali verb-chunk identification, we propose some modules. These modules use tables of manually ...
We present MultiGranCNN, a general deep learning architecture for matching text chunks. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one chunk can be directly compared to longer sequences in the other chunk. MultiGranCNN also contains a flexible and modularized match feature component that is easily adaptable to different types of chunk matching. We...
Currently, large amounts of information exist on Web sites and in various digital media. Most of this information is in natural language: easy for people to browse, but difficult for computers to understand. Chunk parsing and entity-relation extraction are important for understanding the semantics of such information in natural language processing. Chunk analysis is a shallow parsing method, and entity relation ext...
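The shallow-parsing idea this abstract mentions can be sketched in a few lines: instead of building a full parse tree, a chunker groups pre-tagged tokens into flat phrases. The rule and tag set below are illustrative assumptions (a simple noun-phrase pattern over two-letter Penn-style tags), not the method of the paper.

```python
import re

def np_chunks(tagged):
    """Return (start, end) token spans of simple NP chunks: (DT)? (JJ)* (NN)+.

    Works by matching a regular expression over the space-joined tag
    sequence; assumes two-letter tags so token indices can be recovered
    by counting spaces (a toy simplification).
    """
    tags = " ".join(tag for _, tag in tagged) + " "
    spans = []
    for m in re.finditer(r"(?:DT )?(?:JJ )*(?:NN )+", tags):
        start = tags[:m.start()].count(" ")        # chars -> token index
        end = start + m.group().count(" ")
        spans.append((start, end))
    return spans

sent = [("the", "DT"), ("quick", "JJ"), ("fox", "NN"),
        ("jumps", "VB"), ("over", "IN"), ("a", "DT"), ("dog", "NN")]
print(np_chunks(sent))  # [(0, 3), (5, 7)] -> "the quick fox", "a dog"
```

The point of the sketch is that chunking is a linear-time, pattern-matching pass, which is why it scales to the large Web-sized collections the abstract describes.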
The paper presents our work on the annotation of intra-chunk dependencies on an English treebank that was previously annotated with inter-chunk dependencies, and for which there exists a fully expanded parallel Hindi dependency treebank. This provides fully parsed dependency trees for the English treebank. We also report an analysis of the inter-annotator agreement for this chunk expansion task...
This paper reports on our research to build the large-scale Tsinghua Chinese Treebank (TCT). We propose a two-stage approach to reduce manual proofreading labor as much as possible. The insertion of an intermediate functional chunk level creates a good information bridge linking simple chunk annotation with detailed syntactic tree annotation. We describe our chunk and tree annotation schemes, fo...
We present two approaches (rule-based and statistical) for automatically annotating intra-chunk dependencies in Hindi. The intra-chunk dependencies are added to the dependency trees for Hindi which are already annotated with inter-chunk dependencies. Thus, the intra-chunk annotator finally provides a fully parsed dependency tree for a Hindi sentence. In this paper, we first describe the guideli...
Hackystat, an automated metric collection and analysis tool, adopts the “Most Active File” measurement over five-minute time chunks to represent developer effort. This measurement is validated internally in this report. The results show that effort measured with larger time-chunk sizes has a strong linear relationship with effort measured at the standard time-chunk size (1 minute). The percentage of missed effort to total effort is very...
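The “Most Active File” measurement described above can be sketched as follows: bucket edit events into fixed-size time chunks and report the most-edited file per chunk. The event format and function names are assumptions for illustration, not Hackystat's actual API.

```python
from collections import Counter

CHUNK_SECONDS = 300  # five-minute chunks, as in the measurement above

def most_active_files(events, chunk_seconds=CHUNK_SECONDS):
    """events: iterable of (timestamp_seconds, filename) edit events.

    Returns {chunk_index: most-edited file in that chunk} -- a sketch of
    the "Most Active File" idea over fixed-size time chunks.
    """
    chunks = {}
    for ts, fname in events:
        chunks.setdefault(ts // chunk_seconds, Counter())[fname] += 1
    return {idx: c.most_common(1)[0][0] for idx, c in chunks.items()}

events = [(10, "a.py"), (40, "a.py"), (70, "b.py"),
          (310, "b.py"), (330, "b.py"), (350, "a.py")]
print(most_active_files(events))  # {0: 'a.py', 1: 'b.py'}
```

Varying `chunk_seconds` is exactly the experiment the abstract reports: effort totals computed with large chunks are compared against the 1-minute baseline.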
In a pilot investigation, aiming to develop new methodological insights into the study of perceptual modelling of the macro-prosodic organization of spoken Swedish, different aspects of the listeners’ variation were studied. Two listener groups, students at the beginner’s level and trained phoneticians, had to mark the most prominent words and the chunks they could hear in speech samples of spo...
This paper is about syntactic analysis of natural language sentences. The focus is on wide-coverage partial parsing architectures. In this work we enhance and enrich the UCSG shallow parsing architecture that has been developed here over many years. The UCSG architecture combines linguistic grammars, in the form of finite-state machines for recognising all potential chunks, with HMMs to rate and rank...
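The first stage of the architecture above, a finite-state machine that enumerates all potential chunks, can be sketched as a transition table run from every start position. The transitions and tag set below are toy assumptions, not the UCSG grammar, and the HMM ranking stage is omitted.

```python
# state -> {POS tag: next state}; a tiny NP-like machine: (DT|JJ) JJ* NN+
NP_FSM = {
    0: {"DT": 1, "JJ": 1, "NN": 2},
    1: {"JJ": 1, "NN": 2},
    2: {"NN": 2},
}
ACCEPT = {2}  # accepting states

def potential_chunks(tags):
    """Enumerate every (start, end) span the FSM accepts.

    Runs the machine from each start index and records a span each time
    an accepting state is reached, so overlapping candidates are kept --
    ranking them is left to a separate scorer (an HMM in UCSG).
    """
    spans = []
    for i in range(len(tags)):
        state = 0
        for j in range(i, len(tags)):
            state = NP_FSM.get(state, {}).get(tags[j])
            if state is None:
                break
            if state in ACCEPT:
                spans.append((i, j + 1))
    return spans

print(potential_chunks(["DT", "JJ", "NN", "NN"]))
# [(0, 3), (0, 4), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Keeping all overlapping candidates, rather than committing greedily, is what makes a separate rate-and-rank stage necessary, and is the design choice the abstract highlights.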
We present our automatic repair technique, ssFix, which uses syntactic code search to find candidate code that is related to the bug and contains the correct fix, drawing on both the local project and an external code repository. ssFix first identifies suspicious statements in the buggy program through fault localization. For each such statement, ssFix identifies a buggy code chunk which inclu...