bison/tests/reduce.at
Akim Demaille e9955c8373 Have Bison grammars parsed by a Bison grammar.
* src/reader.c, src/reader.h (prologue_augment): New.
* src/reader.c (copy_definition): Remove.
* src/reader.h, src/reader.c (gram_start_symbol_set, prologue_augment)
(grammar_symbol_append, grammar_rule_begin, grammar_midrule_action)
(grammar_current_rule_prec_set, grammar_current_rule_check)
(grammar_current_rule_symbol_append)
(grammar_current_rule_action_append): Export.
* src/parse-gram.y (symbol_list_new, symbol_list_symbol_append)
(symbol_list_action_append): Remove.
Hook the routines from reader.
* src/scan-gram.l: In INITIAL, characters and strings are tokens.
* src/system.h (ATTRIBUTE_NORETURN, ATTRIBUTE_UNUSED): New.
* src/reader.c (read_declarations): Remove, unused.
* src/parse-gram.y: Handle the epilogue.
* src/reader.h, src/reader.c (gram_start_symbol_set): Rename as...
(grammar_start_symbol_set): this.
* src/scan-gram.l: Be sure to ``use'' yycontrol to keep GCC quiet.
* src/reader.c (readgram): Remove, unused.
(reader): Adjust to insert eoftoken and axiom where appropriate.
* src/reader.c (copy_dollar): Replace with...
* src/scan-gram.h (handle_dollar): this.
* src/parse-gram.y: Remove `%thong'.
* src/reader.c (copy_at): Replace with...
* src/scan-gram.h (handle_at): this.
* src/complain.h, src/complain.c (warn_at, complain_at, fatal_at):
New.
* src/scan-gram.l (YY_LINES): Keep lineno synchronized for the
time being.
* src/reader.h, src/reader.c (grammar_rule_end): New.
* src/parse.y (current_type, current_class): New.
Implement `%nterm', `%token' support.
Merge `%term' into `%token'.
(string_as_id): New.
* src/symtab.h, src/symtab.c (symbol_make_alias): Don't pass the
type name.
* src/parse-gram.y: Be sure to handle properly the beginning of
rules.
* src/parse-gram.y: Handle %type.
* src/reader.c (grammar_rule_end): Call grammar_current_rule_check.
* src/parse-gram.y: More directives support.
* src/options.c: No longer handle source directives.
* src/parse-gram.y: Fix %output.
* src/parse-gram.y: Handle %union.
Use the prologue locations.
* src/reader.c (parse_union_decl): Remove.
* src/reader.h, src/reader.c (epilogue_set): New.
* src/parse-gram.y: Use it.
* data/bison.simple, data/bison.c++: b4_stype is now either left
undefined (in which case it defaults to int), or set to the contents
of %union, without `union' itself.
Adjust.
* src/muscle_tab.c (muscle_init): Don't predefine `stype'.
* src/output.c (actions_output): Don't output braces, as they are
already handled by the scanner.
* src/scan-gram.l (SC_CHARACTER): Set the user_token_number of
characters to themselves.
* tests/reduce.at (Reduced Automaton): End the grammars with %% so
that the epilogue has a proper #line.
* src/parse-gram.y: Handle precedence/associativity.
* src/symtab.c (symbol_precedence_set): Requires the symbol to be
a terminal.
* src/scan-gram.l (SC_BRACED_CODE): Catch strings and characters.
* tests/calc.at: Do not use `%token "foo"' as it makes no sense
at all to define terminals that cannot be emitted.
* src/scan-gram.l: Escape M4 characters.
* src/scan-gram.l: Work properly with escapes in user
strings/characters.
* tests/torture.at (AT_DATA_TRIANGULAR_GRAMMAR)
(AT_DATA_HORIZONTAL_GRAMMAR): Respect the `%token ID NUM STRING'
grammar.
Use more modest sizes, as for the time being the parser does not
release memory, and therefore the process swallows a huge amount
of memory.
* tests/torture.at (AT_DATA_LOOKAHEADS_GRAMMAR): Adjust to the
stricter %token grammar.
* src/symtab.h (associativity): Add `undef_assoc'.
(symbol_precedence_set): Do nothing when passed an undef_assoc.
* src/symtab.c (symbol_check_alias_consistence): Adjust.
* tests/regression.at (Invalid %directive): Remove, as it is now
meaningless.
(Invalid inputs): Adjust to the new error messages.
(Token definitions): The new grammar doesn't allow too many
eccentricities.
* src/lex.h, src/lex.c: Remove.
* src/reader.c (lastprec, skip_to_char, read_signed_integer)
(copy_character, copy_string2, copy_string, copy_identifier)
(copy_comment, parse_token_decl, parse_type_decl, parse_assoc_decl)
(parse_muscle_decl, parse_dquoted_param, parse_skel_decl)
(parse_action): Remove.
* po/POTFILES.in: Adjust.
2002-06-11 20:16:05 +00:00


# Exercising Bison Grammar Reduction.  -*- Autotest -*-

# Copyright 2001 Free Software Foundation, Inc.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
AT_BANNER([[Grammar Reduction.]])
## ------------------- ##
## Useless Terminals. ##
## ------------------- ##
AT_SETUP([Useless Terminals])
AT_DATA([[input.y]],
[[%verbose
%output="input.c"
%token useless1
%token useless2
%token useless3
%token useless4
%token useless5
%token useless6
%token useless7
%token useless8
%token useless9
%token useful
%%
exp: useful;
]])
AT_CHECK([[bison input.y]])
AT_CHECK([[sed -n '/^Grammar/q;/^$/!p' input.output]], 0,
[[Terminals which are not used:
useless1
useless2
useless3
useless4
useless5
useless6
useless7
useless8
useless9
]])
AT_CLEANUP
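# The sed filter used in these checks may look cryptic; a small
# standalone sketch (over a made-up sample.output, not a real Bison
# report) shows what it does: `/^Grammar/q' quits at the first line
# starting with `Grammar', and `/^$/!p' prints every non-blank line
# seen before that point.

```shell
# Hypothetical miniature of a bison .output report.
cat > sample.output <<'EOF'
Terminals which are not used:
   useless1

Grammar
rule 0  exp: useful
EOF

# Quit at the "Grammar" line; print the non-blank lines before it.
sed -n '/^Grammar/q;/^$/!p' sample.output
```

# Only the diagnostics header survives the filter; the blank line and
# everything from "Grammar" onward are dropped.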
## ---------------------- ##
## Useless Nonterminals. ##
## ---------------------- ##
AT_SETUP([Useless Nonterminals])
AT_DATA([[input.y]],
[[%verbose
%output="input.c"
%nterm useless1
%nterm useless2
%nterm useless3
%nterm useless4
%nterm useless5
%nterm useless6
%nterm useless7
%nterm useless8
%nterm useless9
%token useful
%%
exp: useful;
]])
AT_CHECK([[bison input.y]], 0, [],
[[input.y contains 9 useless nonterminals
]])
AT_CHECK([[sed -n '/^Grammar/q;/^$/!p' input.output]], 0,
[[Useless nonterminals:
useless1
useless2
useless3
useless4
useless5
useless6
useless7
useless8
useless9
]])
AT_CLEANUP
## --------------- ##
## Useless Rules. ##
## --------------- ##
AT_SETUP([Useless Rules])
AT_DATA([[input.y]],
[[%verbose
%output="input.c"
%token useful
%%
exp: useful;
useless1: '1';
useless2: '2';
useless3: '3';
useless4: '4';
useless5: '5';
useless6: '6';
useless7: '7';
useless8: '8';
useless9: '9';
]])
AT_CHECK([[bison input.y]], 0, [],
[[input.y contains 9 useless nonterminals and 9 useless rules
]])
AT_CHECK([[sed -n '/^Grammar/q;/^$/!p' input.output]], 0,
[[Useless nonterminals:
useless1
useless2
useless3
useless4
useless5
useless6
useless7
useless8
useless9
Terminals which are not used:
'1'
'2'
'3'
'4'
'5'
'6'
'7'
'8'
'9'
Useless rules:
#2 useless1: '1';
#3 useless2: '2';
#4 useless3: '3';
#5 useless4: '4';
#6 useless5: '5';
#7 useless6: '6';
#8 useless7: '7';
#9 useless8: '8';
#10 useless9: '9';
]])
AT_CLEANUP
## ------------------- ##
## Reduced Automaton. ##
## ------------------- ##
# Check that the automaton is the same as the one for the grammar
# reduced by hand.
AT_SETUP([Reduced Automaton])
# The non-reduced grammar.
# ------------------------
AT_DATA([[not-reduced.y]],
[[/* A useless token. */
%token useless_token
/* A useful one. */
%token useful
%verbose
%output="not-reduced.c"
%%
exp: useful { /* A useful action. */ }
| non_productive { /* A non productive action. */ }
;
not_reachable: useful { /* A not reachable action. */ }
;
non_productive: non_productive useless_token
{ /* Another non productive action. */ }
;
%%
]])
AT_CHECK([[bison not-reduced.y]], 0, [],
[[not-reduced.y contains 2 useless nonterminals and 3 useless rules
]])
AT_CHECK([[sed -n '/^Grammar/q;/^$/!p' not-reduced.output]], 0,
[[Useless nonterminals:
not_reachable
non_productive
Terminals which are not used:
useless_token
Useless rules:
#2 exp: non_productive;
#3 not_reachable: useful;
#4 non_productive: non_productive useless_token;
]])
# The reduced grammar.
# --------------------
AT_DATA([[reduced.y]],
[[/* A useless token. */
%token useless_token
/* A useful one. */
%token useful
%verbose
%output="reduced.c"
%%
exp: useful { /* A useful action. */ }
// | non_productive { /* A non productive action. */ } */
;
//not_reachable: useful { /* A not reachable action. */ }
// ;
//non_productive: non_productive useless_token
// { /* Another non productive action. */ }
// ;
%%
]])
AT_CHECK([[bison reduced.y]])
# Comparing the parsers.
cp reduced.c expout
AT_CHECK([sed 's/not-reduced/reduced/g' not-reduced.c], 0, [expout])
AT_CLEANUP
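# The comparison idiom above relies on the two generated parsers
# differing only in the embedded grammar file name (e.g. in #line
# directives), so rewriting one name to the other must yield identical
# files.  A minimal standalone sketch with stand-in files (the *-demo.c
# names are ours, not part of the test):

```shell
# Stand-ins for the two generated parsers: identical except for the
# grammar file name embedded in a #line directive.
printf '#line 1 "not-reduced.y"\nint x;\n' > not-reduced-demo.c
printf '#line 1 "reduced.y"\nint x;\n'     > reduced-demo.c

# Rewriting the name in one must make the files byte-identical.
sed 's/not-reduced/reduced/g' not-reduced-demo.c > rewritten.c
cmp -s rewritten.c reduced-demo.c && echo identical
```

# If the reduction changed anything beyond the diagnostics (states,
# tables, actions), cmp would fail and the test would catch it.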
## ------------------- ##
## Underivable Rules. ##
## ------------------- ##
AT_SETUP([Underivable Rules])
AT_DATA([[input.y]],
[[%verbose
%output="input.c"
%token useful
%%
exp: useful | underivable;
underivable: indirection;
indirection: underivable;
]])
AT_CHECK([[bison input.y]], 0, [],
[[input.y contains 2 useless nonterminals and 3 useless rules
]])
AT_CHECK([[sed -n '/^Grammar/q;/^$/!p' input.output]], 0,
[[Useless nonterminals:
underivable
indirection
Useless rules:
#2 exp: underivable;
#3 underivable: indirection;
#4 indirection: underivable;
]])
AT_CLEANUP
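# The "underivable" (non-productive) nonterminals this test exercises
# can be found by a simple fixed-point computation: a nonterminal is
# productive iff some rule for it uses only tokens and already-known
# productive symbols.  A POSIX-shell sketch over the grammar above
# (the encoding, one "lhs rhs..." pair per line, is ours, not Bison's):

```shell
# Grammar from the test, one rule per line: "lhs rhs...".
rules='exp useful
exp underivable
underivable indirection
indirection underivable'

productive=' useful '            # tokens are productive by definition
changed=1
while [ -n "$changed" ]; do
  changed=
  while read lhs rhs; do
    # Skip left-hand sides already known to be productive.
    case "$productive" in *" $lhs "*) continue ;; esac
    # The rule makes lhs productive iff every rhs symbol is productive.
    ok=1
    for s in $rhs; do
      case "$productive" in *" $s "*) ;; *) ok= ;; esac
    done
    if [ -n "$ok" ]; then productive="$productive$lhs " changed=1; fi
  done <<EOF
$rules
EOF
done
echo "productive:$productive"
```

# Only exp becomes productive (via its `useful' rule); underivable and
# indirection feed each other and never reach the fixed point, which is
# exactly why bison reports them and their rules as useless.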