I've grouped the imperative and declarative models separately, and ordered them more or less chronologically within each group.
Most of these models are discussed at length in my Master's thesis.
Offline sources: [Chri 90], [Shut 93].
Contents:

  Imperative
    ECFGs - 1970
    DTTs - 1984
    Modifiable Grammars - 1990
  Declarative
    Dynamic Syntax - 1973
    Christiansen Grammars - 1985
    RAGs - 1993
ECFGs - 1970

Wegbreit was unable to determine whether all ECFLs (the languages generated by ECFGs) are context-sensitive.
See: Wikipedia article "ECL programming language".
Offline sources: [Wegb 70].
DTTs - 1984

A DTT is essentially a two-level grammar, in which rule-set modification instructions are part of the rules instantiated from rule schemata. Modification instructions are applied by the derivation relation.
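To make the mechanism concrete, here is a minimal sketch in Haskell. The names and types are invented for illustration, not Mason's notation, and the two-level instantiation of rules from schemata is elided, so rules appear here already instantiated, each carrying the modification instructions that the derivation step applies.

    -- A rule carries instructions to add or delete rules; the derivation
    -- relation applies them as part of each step.
    data Sym  = T String | N String           deriving (Eq, Show)

    data Rule = Rule { lhs  :: String         -- nonterminal being rewritten
                     , rhs  :: [Sym]
                     , mods :: [Mod] }        deriving (Eq, Show)

    data Mod  = Add Rule | Delete Rule        deriving (Eq, Show)

    -- One derivation step: rewrite a nonterminal with a matching rule,
    -- then let that rule's instructions update the rule set.
    step :: [Rule] -> [Sym] -> [([Rule], [Sym])]
    step rules sf =
      [ (foldl apply rules (mods r), pre ++ rhs r ++ post)
      | (pre, N a : post) <- splits sf
      , r <- rules, lhs r == a ]
      where
        splits xs = [ splitAt i xs | i <- [0 .. length xs - 1] ]
        apply rs (Add r')    = r' : rs
        apply rs (Delete r') = filter (/= r') rs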
Without the adaptivity mechanism, DTTs would be van Wijngaarden grammars using a different terminology; and van Wijngaarden grammars are Turing-powerful. In fact, when Mason proved that DTTs are Turing-powerful, he didn't use the adaptivity at all.
Offline sources: [Maso 84], [Maso 87].
Modifiable Grammars - 1990

A Modifiable Grammar consists of a CFG and a Turing transducer. Given a partial derivation as input, the Turing transducer outputs a list of rules to be added, and another to be deleted. Following the directions of this transducer, the parser modifies the rule set before each derivation step.
Since the purpose of the rule-set modifications is to guide the decisions made by the parser, the transducer must be designed with precise knowledge of the order in which the parser will make those decisions. To allow for this, Burshteyn actually defines two grammar models, called TDMGs and BUMGs (Top-Down Modifiable Grammars and Bottom-Up Modifiable Grammars; I suspect that Burshteyn generally doesn't pronounce acronyms.)
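Here is a hedged sketch in Haskell of a single top-down step, to show the shape of the parser/transducer interaction; the names are mine, not Burshteyn's, and the Turing transducer is abstracted as an ordinary function from the partial derivation to the two lists of rules.

    data Sym  = T String | N String      deriving (Eq, Show)
    data Rule = Rule String [Sym]        deriving (Eq, Show)  -- lhs nonterminal, rhs

    -- Given a partial derivation, the transducer says which rules to add
    -- and which to delete.
    type Transducer = [Sym] -> ([Rule], [Rule])

    -- Before each derivation step, modify the rule set as the transducer
    -- directs; then expand the leftmost nonterminal.
    step :: Transducer -> ([Rule], [Sym]) -> [([Rule], [Sym])]
    step trans (rules, sf) =
      [ (rules', pre ++ body ++ post)
      | let (toAdd, toDelete) = trans sf
      , let rules' = toAdd ++ filter (`notElem` toDelete) rules
      , let (pre, rest) = break isN sf
      , N a : post <- [rest]
      , Rule b body <- rules'
      , b == a ]
      where
        isN (N _) = True
        isN _     = False

Consulting the transducer before, rather than after, each expansion is the point: the rule set in force at every decision the parser makes is under the transducer's control.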
As you might expect of a grammar with an embedded Turing machine, Modifiable Grammars are Turing-powerful.
Offline sources: [Burs 90], [Robe 91], [Burs 92].
Dynamic Syntax - 1973

In his Survey of Adaptable Grammars [Chri 90], Christiansen aptly characterizes Dynamic Syntax:
Hanford and Jones (1973) have suggested a concept of dynamic syntax which is a monstrous device based on the λ-calculus. ... Unfortunately the approach is not given a proper formalization or accompanied with an appropriate notation.
Offline sources: [Hanf 73].
Christiansen Grammars - 1985

A Christiansen grammar is essentially an EAG in which every nonterminal has a specially designated grammar attribute whose value is a set of grammar rules. Each nonterminal is expanded, during derivation, via a rule taken from the value of its grammar attribute. Rule sets are explicitly manipulated in the semantic equations for the grammar attributes (thus satisfying my definition of an adaptive grammar); and adaptations follow the block structure of the parse tree, since all rule-set values are localized to particular nodes of the tree.
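The localization of rule sets to tree nodes can be sketched in a few lines of Haskell. This is a drastic simplification with invented names: the grammar attribute is threaded top-down, and each rule's semantic equation is reduced to a function that may adapt the inherited rule set.

    data Sym  = T String | N String
    data Rule = Rule { name :: String              -- nonterminal it expands
                     , body :: [Sym]
                     , grow :: [Rule] -> [Rule] }  -- simplified semantic equation

    data Tree = Leaf String | Node String [Tree]   deriving Show

    -- Each nonterminal is expanded via a rule taken from its own grammar
    -- attribute, and its children are derived under the adapted rule set.
    derive :: [Rule] -> Sym -> [Tree]
    derive _ (T t) = [Leaf t]
    derive g (N a) =
      [ Node a kids
      | r <- g, name r == a
      , let g' = grow r g                          -- adapt the local rule set
      , kids <- mapM (derive g') (body r) ]        -- children see only g'

Because the adapted rule set g' is passed down into the subtree and never back up, a rule added while deriving one construct is invisible outside it, which is the block-structured scoping described above.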
Since Christiansen grammars add adaptivity to attribute grammars, and attribute grammars are already Turing-powerful, naturally Christiansen grammars are Turing-powerful.
Although he originally based his model on Watt and Madsen's EAGs, Christiansen reformulated it in terms of DCGs (Definite Clause Grammars) around 1990, and has clearly preferred that formulation ever since.
A superficial idiosyncrasy of the earlier, EAG-based adaptive grammars is that Christiansen called them generative grammars, thus overloading the standard meaning of that term. (Chomskyan grammars take a generative, or transformational, approach to linguistics, in preference to the historically orthodox descriptive approach.) The later, DCG-based adaptive grammars present no such difficulty, as he has called them Generative Clause Grammars (GCGs). I have chosen to use the term Christiansen Grammars to refer to his adaptive grammar model in both of its formulations.
Offline sources: The original paper is [Chri 85]; a more developed treatment of the model is [Chri 88a]. The DCG-based formulation is described in [Chri 90], and carried further in the direction of logic meta-programming by [Chri 92a, Chri 92b].
RAGs - 1993

A RAG does not have separate value domains for terminals, nonterminals, and attributes. Instead, each RAG has a single domain called an answer algebra, whose values (answers) are all capable of acting as syntax, metasyntax, or semantics, depending on how they are used. (See What is a grammar?)
RAG unbound rules are similar to EAG rule forms, but in place of attributed nonterminals, an unbound rule has ordered pairs of polynomials. The first component of each pair plays a metasyntactic role, and the second, semantic. Answers are syntax when they occur outside of the ordered pairs.
Each answer has an associated set of unbound rules; these determine an associated set of possible bound rules, formed by substituting values for the variables in each unbound rule (using the answer itself as the metasyntax value on the left side of the rule). The derivation relation can use bound rules to expand ordered pairs of answers.
In addition to the operators defined by the answer algebra, polynomials in an unbound rule can also use a special binary operator called the query operator. The query operator treats its first operand as metasyntax and its second as syntax, and "returns" a semantic value. Actually, the reduction of a query (i.e., a polynomial using the query operator) to an answer is accomplished by the derivation relation, using a potentially unbounded number of steps. In effect, the query operator allows a RAG parser to invoke itself recursively as part of a derivation step; hence the word "recursive" in the name of the model.
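The following Haskell fragment is only a vocabulary sketch, with invented constructor names; it fixes the shape of answers, polynomials, and unbound rules, while the actual reduction of queries by the derivation relation is left as the description above.

    -- A single domain of answers serves as syntax, metasyntax, or
    -- semantics; atoms and pairs stand in for an arbitrary answer algebra.
    data Answer = Atom String | Cons Answer Answer    deriving (Eq, Show)

    -- Polynomials over the answer algebra, extended with the query
    -- operator: Query m s is reduced by the derivation relation, deriving
    -- syntax s under metasyntax m to "return" a semantic answer.
    data Poly = Val Answer
              | Var String
              | Query Poly Poly
              deriving Show

    -- An unbound rule: the metasyntax on the left is the answer that owns
    -- the rule, so only the left-hand semantic polynomial appears here;
    -- the right side mixes plain syntax with (metasyntax, semantics) pairs.
    data Unbound = Unbound Poly [Either Poly (Poly, Poly)]

    -- Substituting answers for a rule's variables is what turns an
    -- unbound rule into a bound rule.
    bindPoly :: [(String, Answer)] -> Poly -> Poly
    bindPoly env (Var x)     = maybe (Var x) Val (lookup x env)
    bindPoly env (Query m s) = Query (bindPoly env m) (bindPoly env s)
    bindPoly _   p           = p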
RAGs have both structured conditionals (through selection of bound rules during derivation) and recursion (through application of the query operator). So, naturally, RAGs are Turing-powerful.
Offline sources: [Shut 93].