Discussion of the main problems in the project - Logics
Exception TS:
In pw we have chosen an intermediate way concerning the organization of sentences (and sentence types). As a 'sentence' E in pw we regard a short sequence of natural language sentences which, relative to the meta-level (in, below or above the context S itself), expresses a 'closed thought'. Every such sentence E has to be defined according to one of the given base sentence types WW, EW or DW.
If one now implements one's own sentences, or the thoughts of others, we find that, when doing the logical formalization, not all atomic sentences adhere to the chosen base type of the sentence E (one of them might, for example, not be an assertion but a definition). The problem is that, when trying to compute the consistency of the atomic sentences in 2BR, we must then handle logical formulas whose evaluations normally don't belong together.
Example: given E1, converted into assertional logic with the simple formula 'a u b o c'. What to do when a and c are assertions and b is a definition? There are no predefined truth tables which handle this.
Which possibilities do we have to resolve this?
1.) One can force the user to split up his thought E into sentences in which all parts belong to the same base type, so that each such sentence yields one logical formula including its logical operators
2.) One allows exceptions
A real example from pw is:
S192 *E23, DW:
E23, EN:
DW, TS1: On the basis of E22 we can define the concept of recognition in TS2 as follows. [K1]
DW, TS2: Recognition is effective action, ...
WW, TS3: (effective action as in TS2 means), operational effectivity in the environment of existence of the organism.
This is now formalized as 'TS1 u TS2 u WWTS3[10]', i.e. the exception WW is part of the logical formula. See problem 3 below!
In pw the second variant 2.) was chosen, with all its negative consequences. The reason is that in practice this case occurs quite often; otherwise, if the user doesn't transform his thought into propositional logic first, he has to correct all his already built 1BR relations afterwards whenever he encounters that the sentence E entails an atomic one which doesn't fit the base E type (I also encountered a case with all three base types inside one E, but this will not be allowed).
pw thus allows (at this time) one exception type TS for one TS formula inside an E. The handling is defined in PWR59E34f.; in 1BR such 'exception TS' sentences (atomic sentences) must be evaluated to the default in the context of E, otherwise the whole E isn't evaluable (the practical handling of this approach concerning the computation is another problem...!).
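A minimal sketch of this handling (Python; the tuple format, the single-exception limit and the default value passed in are illustrative assumptions here, the authoritative rule is PWR59E34f.):

```python
# Sketch: one 'exception TS' inside an E is forced to the default value of
# the context of E, so the rest of the formula stays evaluable.
# Atom format (ts_id, xw_type, value) and the default are assumptions.

def eval_exception_ts(atoms, context_type, default):
    """atoms: list of (ts_id, xw_type, value) tuples of one E."""
    exceptions = [a for a in atoms if a[1] != context_type]
    if len(exceptions) > 1:
        raise ValueError("only one exception TS per E is allowed")
    return [
        (ts_id, context_type, default) if xw != context_type
        else (ts_id, xw, value)
        for ts_id, xw, value in atoms
    ]

# E23 from above: two DW atoms and the WW exception TS3[10];
# the DW default is assumed to be 70 here.
print(eval_exception_ts(
    [("TS1", "DW", 70), ("TS2", "DW", 70), ("TS3", "WW", 10)], "DW", 70))
```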
The background is the following:
If there are relations in 1BR, or propositional logic closures inside 2BR, between different sentence types XW occurring in the session, the user has to define (in the second case) how such closures should work in 2BR (in 1BR these are handled on an intuitive basis... see above). And such closures occur in 1BR especially when both sentence types are mixed!
Example: see Hume's famous argument against drawing a closure from what is to what ought to be (see D. Hume: 'An Enquiry Concerning Human Understanding').
3.) A follow-up problem: how to handle a natural language sentence SE of a mixed type, with logical operators connecting parts which don't belong together and are not to be evaluated together? Do we create separate sentence sets which are held together by common logical operators? But are these operators clearly defined then? What if not? This issue is still not resolved, even in PWR!
3.A) Presumably extraction rules for 2BR might work, i.e. rules, per logical operator, for how to extract exception TS from the logical formula and put them into a separate one
3.B) split the formulas manually at design time, already according to rules in PWR (TBD)
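For 3.A), a possible extraction rule for the simplest case, a pure conjunction, could look like this (Python sketch; the token format is an assumption, and other logical operators would need their own rules):

```python
# Sketch: for a pure conjunction 'u' the exception atoms can be split off
# into a separate formula without changing the remainder; other logical
# operators would need their own extraction rules (TBD in PWR).

def extract_exceptions(atoms, context_type):
    """atoms: list of (ts_id, xw_type); the formula is their conjunction."""
    main = [ts for ts, xw in atoms if xw == context_type]
    exceptions = [ts for ts, xw in atoms if xw != context_type]
    return " u ".join(main), exceptions

print(extract_exceptions([("TS1", "DW"), ("TS2", "DW"), ("TS3", "WW")], "DW"))
# -> ('TS1 u TS2', ['TS3'])
```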
The impractical alternative would be to do philosophy by writing atomic sentences directly... Carnap might have liked that...
Logics WW:
We discussed base/starting assumptions concerning logics and truth values in the first chapter above.
Concerning the base interpretations of a t/f/... logic (WW) for the 1BR, we can separate the following main problems:
1.) Is negation truth-functional or not?
2.) Interpretation of the binary logical operators (in many-valued logics)
3.) Not all concatenated sentences can be easily transferred into logical formulas
1.A) The biggest problem with many-valued logics, which are used in 1BR from the start (WW with the values 10, 20, 30 and 40), is the interpretation of the negation. If we negate a sentence SE, or if this sentence entails a negation, it should be clear what the consequences are. But as we don't interpret the WW values, and thus not the logical operators either, and the truth value tables WW (PWR25) are just recommendations, it is up to the user how he decides when evaluating relations between sentences SE in 1BR. The same holds for the consistency of his/her chosen approach. It is up to him/her.
The negation can be interpreted in two basic ways:
a) not true means false
b) not true means all other values except true, i.e. 20, 30, 40; which has as a consequence that (even when choosing only one such case) this logic must be regarded as 'non-truth-functional', with all aspects following from that. These consequences are such that the result of the evaluation of a logically concatenated set of sentences or logical formulas is no longer one truth value but several of them (in the sense of the mathematical notion of a function). How to deal with that, also in the following 2BR, has to be defined by the user too, when defining his logics for the 2BR.
NB: in fact this plays a bigger role in the 2BR because, as mentioned above, the 1BR has no interpretation of the logical operators from the start, i.e. the 1BR computation relies in large part on the intuition of the user, without control by the processing machine.
c) any intermediate variants might be chosen (for WW, EW, DW) and then be defined for 2BR
Follow-up problems of that are:
1.B) the relations, and hence the 1BR consistency computation, are underdetermined; this might be resolvable via the definitions for 2BR.
1.C) Every user implements his relations between sentences with a different concept of logic in mind, so others will not adhere to his explanations when trying to choose these relations in 1BR. The workaround for the moment might be to postpone such relations, as a user decides for himself. Creating relations between SE or TS might also be possible later in 2BR, and these can be modified by every user from 2BR on (the 'override' concept in pw), but this is TBD...
1.D) The basic problems of such many-valued logics remain, if they are used, to be decided case by case by the user
1.E) The variants a) to c) might all be convincing. An intuitively clear logical solution is presumably given only with the classical 2-valued logic (for details see L18, N. Rescher in pw)
1.F) Even for 1BR one can decide, in an intuitive way, to use two different 'negations' (it is not assumed, but also not forbidden, in PWR to use different ones). For 2BR they can be defined in truth tables. This can be understood in a more general way: one can use more than one unary logical operator.
a) then it must be defined how to handle double (multiple) negations, also in combined form.
b) Are there also good interpretations for these cases a)? If not, must they be forbidden?
c) If even differently interpreted operators are used (see 'not undecidable', Rescher L18 p155, and Słupecki's T-operator, p163), does a combination of them make any sense? (see a similar discussion in articles on modal logics: does it make sense to say a sentence b is necessarily necessary, i.e. something like NNb?)
d) how to handle such cases when traversing logical boundaries of ranges or types (see last chapter%%L)?
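The idea of more than one unary operator, and of composing them, can be sketched as follows (Python; three values abbreviated T/I/F, the negation table is the usual Lukasiewicz one, and the T-operator mapping every value to the intermediate one follows Rescher's presentation of Słupecki; both tables are assumptions for illustration):

```python
# Sketch: two different unary operators over a 3-valued WW-style logic
# (values abbreviated T, I, F; the pw codes 10/20/... would work the same).

NEG = {"T": "F", "I": "I", "F": "T"}   # Lukasiewicz-style negation

def neg(v):
    return NEG[v]

def t_op(v):
    # Slupecki T-operator: constant intermediate value, whatever the input
    return "I"

def compose(*ops):
    """Build one unary operator from a chain of unary operators."""
    def combined(v):
        for op in ops:
            v = op(v)
        return v
    return combined

double_neg = compose(neg, neg)   # ~~p behaves like identity here
neg_t = compose(t_op, neg)       # ~Tp: always the intermediate value
print([double_neg(v) for v in "TIF"])   # ['T', 'I', 'F']
print([neg_t(v) for v in "TIF"])        # ['I', 'I', 'I']
```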
2.A) On the binary logical operators in many-valued logics there is a large body of literature and interpretations, with pros and cons in various contexts (temporal logics, intuitionistic ones... and, for the same contexts, different concepts, as in epistemic logics...). Depending on the definitions of their truth tables (in the case of WW), they have some advantages and some disadvantages compared to the 2-valued logics (see also Rescher L18), i.e. from the logical and the applicational perspective they solve some problems and create new ones, e.g. classical closures or tautologies that cannot hold for both the 3-valued negation and the binary operator interpretations.
Example:
The classically contradictory formula 'p and ~p' must/should always give false (so that its negation is a tautology). But in the case of an intermediate truth value I, for 'undecidable', this formula gives I as a result when we interpret I in the most intuitive way and put in I for p (L18, around p109). This means we lose one classical tautology, with further effects on other derivable classical tautologies. Other interpretations of I pose other problems...
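This lost law can be reproduced in a few lines (Python; the tables are the strong Kleene / Lukasiewicz ones, conjunction = minimum and negation = 1 - x, which is just one common reading among those Rescher discusses):

```python
# Sketch: p and ~p is F for p in {T, F}, but I for p = I, so the classical
# law 'p and ~p is always false' no longer holds.

T, I, F = 1.0, 0.5, 0.0   # truth values; I = 'undecidable'

def neg(p):
    return 1.0 - p

def conj(p, q):
    return min(p, q)

for p in (T, I, F):
    print(p, conj(p, neg(p)))   # -> 0.0, 0.5, 0.0 respectively
```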
3.) 'Not all concatenated sentences can be easily transferred into logical formulas' means:
3.A) Often natural language operators between sentences behave like logical ones between atomic sentences, for example 'because'. But they are not logical ones and must be rewritten, including the complete SE.
Example: 'because' behaves a little bit like 'if... then', but the comparison doesn't hold up. 'If... then' is much stronger, whereas 'because' assumes only some cause, unknown in detail, and otherwise 'behaves' the same as the 'if... then'. If we want to use a real logical operator like 'if... then' (=>), we have to rewrite the SE. But there are no rules for how to do that... and we cannot just invent missing facts that would provide us with the missing information to rewrite the natural language sentence in if-then form.
Isn't it easier to just forbid all formulas with mixed XW types? Not without solving the same problem in 1BR, i.e. that closures (or, in general, logical formulas) between/with different sentence types (WW, DW, EW) are basically possible and also used in practice (whether correctly will be the next question).
Logics EW:
Some things discussed for the WW logics can be applied 1:1 to the other logics.
1.) The problem of negations remains similar
2.) The problems with binary operators are similar
1 and 2.A) Analogous to the discussion above, the relevancy and justification of the other logic types raise, here for EW, some interpretation problems. An example: given the situation that one wants to state that either a or b should/must be decided, we then have ~(a u b). Let's evaluate some cases. If we decide to stick with a, we evaluate ~(50 u 60). Then we depend on the logic given to the conjunction in the EW logic. If we let it be similar to WW, we get ~60, making finally 50 if we use the classical negation interpretation. If we use a non-truth-functional one, we might end up with [50, 40], i.e. either yes or not-evaluable, based on the pw starting values. This seems a reasonable example, showing that also for non-propositional sentences we can use a logic, and that we face similar problems as with assertions.
But what happens when we have to evaluate one of the sentences in the example with 40? Say we get ~(50 u 40). With the pw default interpretation we get ~40, and then what? There is no classical negation result for ~40; the pw default is again 40. Non-truth-functionally it will be [50, 60]. These results are completely different! Which one is the right/better one, and what will be the side effects in either case?
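The two examples can be written down directly (Python; the conjunction table and both negation variants are assumptions that merely mirror the worked example above, not the authoritative PWR tables; 50 = yes, 60 = no, 40 = not evaluable):

```python
def conj_ew(a, b):
    # 'not evaluable' dominates (pw default), otherwise 'no' dominates 'yes'
    if 40 in (a, b):
        return 40
    return 60 if 60 in (a, b) else 50

def neg_classical(v):
    # truth-functional: swap yes/no; 40 falls back to the pw default 40
    return {50: 60, 60: 50, 40: 40}[v]

def neg_ntf(v):
    # non-truth-functional: 'not v' = the set of all other values
    return sorted({50, 60, 40} - {v})

print(neg_classical(conj_ew(50, 60)))   # ~(50 u 60) -> ~60 -> 50
print(neg_classical(conj_ew(50, 40)))   # ~(50 u 40) -> ~40 -> 40
print(neg_ntf(conj_ew(50, 40)))         # non-truth-functional: [50, 60]
```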
Logics DW:
In the case of EW sentences it seemed to be straightforward, but: do we need an extra logic for definitions? Basically they are not propositions, but rules for new signs, which we accept or deny. Therefore it seems necessary to handle them not as propositions. But does it make sense to use them in logical formulas?
Example:
Given the mutually exclusive definitions of i as given above (L%%): what about the formula 'a u b' standing for both?
1.) In propositional logic we don't look inside the atomic sentences; so from this point of view we don't 'see' that they are mutually exclusive.
2.) If we evaluate this sentence, we might use yes (70) for one and no (80) for the other, and end up with no, as the conjunction would classically give yes only if both are yes. This makes sense here.
3.) What if both definitions have nothing in common? From a logical point of view this would have no impact. The same holds for propositional sentences too!
Another differentiating point might be the pw distinction between pure definitions and explaining definitions. Whether we really need DW logics might be decided by looking for practical examples inside pw...
Metalogic(s) needed?:
There are two major works targeting the problems of metalogics and the possibility to fix all possible (meaningful) sentences in formal logics:
One is by A. Tarski (L17 in pw), explaining why a formal logic that should be able to speak about its own concepts needs a separate formal metalogic; the other is the famous work of K. Gödel concerning logical systems similar to Russell/Whitehead's Principia Mathematica. In the Principia, Russell tried to avoid antinomies (like his set antinomy) by deploying a theory of types.
But with Gödel's result we, simply speaking, end up with the conclusion that, given formal languages of sufficient expressiveness (see Tarski), there exist sentences which cannot be evaluated as true or false in the formal system, or they would create contradictions.
Both aspects are targeted in newer logical investigations subsumed under the title of paraconsistent logics. There are different schools, but the common positions are: there are true inconsistencies, we should accept them and try to build a logic able to handle this, and we need no different formal logics, just one (formal language, as opposed to Tarski's concept).
The point is that this approach also targets the question whether we need a metalogic in pw, defined beforehand, to define our object logics.
After this short intro, the topics:
1.) pw conceptually tries to compute the consistency of sentences. But if there are 'true' inconsistencies, it should be able to handle such cases too. For now we use the approach of defining the 1BR metalogic via the EW type, thus avoiding the question which approach 'is' the right one (Tarski vs. the paraconsistent ones) and avoiding sentences having the value 30 (in EW, DW). This approach is perhaps new, as the standard literature doesn't combine this base problem with different logical types (the XW) and separate computation steps! Whether this concept holds has to be shown...
2.) In natural languages there is no distinction between metalayers (except for the writing rule as in 'The word "word" has four letters.'). If we want to formalize a thought SE in pw in propositional logic, we face the problem that one of the atomic sentences might be a meta-sentence, but is still bound by logical operators to the other atomic sentences. See the example for F below.
A) One question is whether such mixed sentences are basically OK, even in natural language; ...but they do occur!
B) Propositional logic per se ignores the content of atomic sentences. From this point of view it might be legitimate to regard even metasentences as part of the logical formula. But the problem comes back with (the default) predicate logic for 3BR. There the content of an atomic sentence is not to be ignored, and it cannot be that we have to rewrite the propositional logic formula when switching to predicate logic! Whether there are scientific articles on this topic is unknown to me...
C) pw uses its own approach for this case (see PWR). But what is 'right' is still an open question, though the answer, in the case of pw 1BR, would in any case come too late (see D).
D) Independent of which variant one chooses for the problems B) and C), the TS must be defined/implemented, i.e. this is statically not possible on the pure basis of PWR. The rules PWR are presumptions in pw to enable the user to evaluate the relations of the SE correctly also in 1BR, by seeing the propositional logic structure of the SE to be evaluated (even on the 1BR level). ...Here practical aspects confront theoretical ones. A solution isn't given so far. It's just a pragmatic intermediate way...
E) A similar problem as with the meta-sentences occurs, by the way, with the mixed sentence types (see above: the SE type might be WW but there is an atomic EW inside it). Here we also have to extract this one from the formula; or, if not, what else should be done with the formula?...
F) The basic question is whether natural language sentence sequences which entail a metasentence must be formalized with a formal language reflecting such a metasentence, i.e. the formal language itself must entail a higher level, or whether the formal language is independent of any metalevels inside the natural language it formalizes. In short: is the formalizing language a language independent of the language it formalizes? If so, both have nothing in common, which seems strange (NB: we chose a special language to formalize our natural language); but if they somehow belong together, how should/must the different metalayers of a) the natural language ones to be formalized and b) the assertions about the formal language itself be separated?
Example (F):
Given the sentence: Snow is white and the word 'word' has 4 characters.
a.) The sentence might have the meaning that the natural language word 'word' is intended
b.) The sentence might have the meaning that all 'words' are included, and thus also words, i.e. predications, in our formal language
c.) The sentence might have the meaning/context that just 'words' of our chosen formal language are intended. That means it would be a proposition just about our formal language.
How should a formalization handle these cases, given the options 1 or 2 above? We then need at least 6 cases: a-c in a paraconsistent approach, and the same for the Tarski approach.
For a) we need a second-order logic, allowing us not only the predication word(x) but also has_four_letters(word). This would be the same in both approaches. In case c), in the Tarski approach we need a formal language allowing us predication over the/some language elements of our formal logic. In the paraconsistent way the language would stay the same, details TBD...
In b) we would perhaps need a reference to 'all' languages, or an enumerated set of languages we potentially refer to. Those are natural and formal ones.
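As a loose programming analogy for case a) only: treating predicates as first-class objects allows both the first-order predication word(x) and a predication over the word itself (the function names here are invented for illustration):

```python
# Sketch (analogy only): second-order-style predication for case a).

def is_word(x):
    # first-order predication word(x)
    return isinstance(x, str) and x.isalpha()

def has_four_letters(w):
    # predication over the word itself: has_four_letters('word')
    return len(w) == 4

print(is_word("word"), has_four_letters("word"))   # True True
```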
Do these quick investigations help us concerning the questions above? Why (I think) not?
Need for a propositional logic definition for 1BR? 'Backtrace'?!:
1BR is assumed to be evaluated on the basis of the PWR definitions and by user intuition. How do we get the consistency assured for any follow-up in 2BR, where the user uses his 2BR logics defined in 1BR, when 1BR isn't defined in a completely formal way? The idea is to use the 2BR upgrade definitions later on, in a backtrace step, to check whether both the 2BR logics and the upgrade Step1 'definitions' are consistent.
For now this problem is addressed just insofar as we introduced the Step1 definitions, to at least perhaps be able to do a consistency check between 1BR and 2BR.
But will we be able to? Don't we need a fully defined 1BR logic to go consistently to another one in 2BR? Can we verify the upgrade consistency without having a 1BR logic given?
...further discussion is postponed for now !
Equality of formal sentences (in contexts or outside contexts):
When trying to process the consistency of propositional logic sentences, we might ask whether all sentences given for processing, and their logical formulas, really contain unique atomic sentences. What about a formula SExTS1 u ... SEzTS3 where the TS are the same? If one then gives both different evaluations, say true and false, just because the words in the natural language differ, we might get consistency where there should be an error (as 'same' atomic sentences cannot be evaluated differently)!
Another question is whether there is a possibility to autodetect same sentences.
We can distinguish two cases for pw. Given atomic sentences in the same context (potentially provided by different users), these might belong to the same context S inside pw or not. In pw we use the following working hypothesis so far (whether this holds has to be investigated on detailed examples when really processing the 2BR, which might be the case from pw2.2 on...):
If sentences TS belong to the same context S, their words are assumed to belong to the same context. As otherwise one could question whether one of them was put in at the correct place; but the context concept inside pw isn't much more exact than in natural language itself...
If so, the words in the sentences are regarded as comparable inside this context, otherwise not.
A simple example for the second case might be, in German, talking about a 'Bank': in one case it might be a monetary institute and in another a piece of wood to sit on (there are even more meanings for this term in German...). The other way round, we have different words meaning the same, like 'unverheiratet sein' (being unmarried) and 'Junggeselle' (bachelor) in German. In short, the classical synonym, homonym, ... distinctions.
Given now the case that we have two such sentences TS inside the same context, how can we see that they should be logically identical? Some resolution approaches:
1) One might agree on a manual approach: systematically investigate, or just see such a case occur by accident, and create a mapping table. The program can trace every TS evaluated and always map it to the first occurrence evaluated (there might be more than two sentences TS that are identical).
2) We might use some existing word mapping tables, combined with an NLP analysis of our TS sentences (into predicate logic, or just a syntactical or even semantical analysis), as input for such a duplicate detection of TS sentences (a) inside a common pw context, or even b) outside).
3) Does anyone have other ideas?: statistical approaches, ...
4) Are there any scientific articles out there which discuss this problem, whose approach we could use or test? Good keywords to google for?
5) One can argue that it doesn't matter for propositional logic if the same atomic sentence is split into different ones (e.g. an a becomes a, b in the same or in different formulas), as this logic doesn't take the inside of sentences into account. But what happens when formalizing them into predicate logic? A formula turns out to be different, and even retransferred into propositional logic it stays different, just because two atomic sentences TS really 'are' the same (and so the evaluated propositional formula, and in that case the whole consistency computation, was wrong...).
6) What about the case of different contexts? Can we still have the same sentences in different contexts? There are at least two possibilities even inside pw.
A) pw contexts are defined (till now!) by their S header. The context there is defined as all sentences below that respond to this context, on the same metalevel or one metalevel up or down. Now the case might occur that two sentences have similar topics on different metalevels, where their E might be 'the same' (on the object level from the one side and on an appropriate metalevel from the other).
B) Indeed there is no check procedure at all to control the similarity of the topics implemented by users. But it is not probable that implemented contexts are the 'same', even if many will be similar!...
7) Are we able to formalize the contexts themselves?
7.A) One approach would be to logically formalize the S header itself, which could then be an input for resolving the question above.
7.B) We might rewrite a topic S as a new sentence type 'question', and check for any SE in this context whether it is worthwhile as an answer to this question (or these questions). Seems a good idea, but can we formalize and automate that?... Otherwise this will be of minor use!
7.C) If we assume, for example, that any sentence in a topic S answers according to an/the analyzed subject/verb combination of the topic S, how do we again manage the homonym/synonym problems?...
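Approach 1) could be sketched like this (Python; the normalization key, here simply the raw sentence text inside a context, is an assumption, so real synonym detection would still need approach 2):

```python
# Sketch: trace every evaluated TS and map it to its first occurrence, so
# 'same' atomic sentences cannot silently receive different evaluations.

class TsRegistry:
    def __init__(self):
        self.first = {}   # (context, sentence text) -> (canonical id, value)

    def evaluate(self, context, ts_id, sentence, value):
        key = (context, sentence)
        if key not in self.first:
            self.first[key] = (ts_id, value)
            return ts_id, value
        canon_id, canon_value = self.first[key]
        if canon_value != value:
            raise ValueError(
                f"{ts_id} duplicates {canon_id} but is evaluated differently")
        return canon_id, canon_value

reg = TsRegistry()
reg.evaluate("S192", "TS1", "snow is white", 10)
print(reg.evaluate("S192", "TS5", "snow is white", 10))  # maps back to TS1
```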
Pros/cons of these approaches:
1) This will be the pw default approach if there is no better idea. And it will not resolve the problem of word and sentence similarities outside contexts (if these really exist!). Pro, as it is a possible, even if error-prone, way. Con, as it might be too much work to be a practical solution once the amount of sentences gets a little bigger!
2) Here the problem is the use of tools which should be applied for later computation steps. Do these give the same (!) results for the sentence analysis as the (still) manual approach? We might argue that they do in many cases, but not in all. So we again have to check manually whether the automated procedure works for all sentences of our session. Will this be faster or easier?
3) and...
4) I will be happy for constructive input ...
5) It is presumably beyond discussion that any evaluated set of sentences, natural language or formalized, should be assumed to be processed the same in either version. So the remaining problem is how to deal with the possible differences. That propositional logic doesn't see into a sentence's content is just an intermediate step in natural language formalization. In pure formal logic this need not be a problem, but in natural language formalization it is!
6) For the time being we might/should ignore this problem; at least until it turns out to be a general one... shall we?
Live examples from pw should be implemented here too, on an ongoing basis...
Contexts in natural language logical formalization:
In the last topic, equality of formal sentences, we already discussed the problems when formalizing natural language into propositional logic. This topic reoccurs more seriously when upgrading to predicate logic.
Example: given two natural language sentences inside the same context, one containing 'walk', the other 'go'. The rest of the sentence SE, and thus the propositional logic formula, should be the same. How should we, in this case, handle the logical equality of both sentences, as these two words cannot be handled the same way in all contexts? And even in the same context their meaning can prevent us from handling them as being the same (as, for example, 'go' need not exactly mean to walk there; it might mean to go 'by bus', or be undefined).
It looks like we cannot automate this problem away, which means we cannot automate the generation of predicate logic from natural language at all. Are there workarounds?
1) We might devise some check procedure that throws warnings in known ambiguous cases. But from what do we build such a knowledge base (KB)? Will it be reliable?
2) The KB must be designed so that there is one main concept per given context, together with all the synonym words that can be substituted for the main word in the formula (potentially, or best, with a substitution remark, so the user knows). Will such a solution be practically usable and manageable when the KB grows big?
3) The idea mentioned above was to provide an SE context with a new sentence type 'question'. This type might be logically formalizable in a similar way as assertions. Then there could be a way to map the SETS to the context, i.e. by checking whether they answer the question! But how should this work in detail?...
4) We might just ignore the problem of logically identical predications (NB: go(x) and Truck(x) are logically both predications, but not in natural language!). But then we must face the problem that the consistency computation can be wrong when logically identical sentences (in the same context or not) are evaluated differently, which normally must produce an error.
5) Conceptually, contexts should be mapped in either way to GBs in pw (some ideal formal concept areas). How, is open in detail...
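The KB design from 2) might be sketched like this (Python; the contents and the walk/go entry are invented examples, not a real word list):

```python
# Sketch: one main concept per context, plus the synonym words that may be
# substituted for it; the substitution is reported so the user can see it.

KB = {
    ("movement", "walk"): ["walk", "go"],   # in context 'movement': go ~ walk
    ("furniture", "Bank"): ["Bank"],        # German 'Bank' as a seat
}

def normalize(context, word):
    """Return (main word, substituted?) for a word inside a context."""
    for (ctx, main), synonyms in KB.items():
        if ctx == context and word in synonyms:
            return main, word != main
    return word, False   # unknown words stay as they are

print(normalize("movement", "go"))    # -> ('walk', True), with remark
print(normalize("banking", "Bank"))   # -> ('Bank', False): other context
```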
As the predicate logics part is now in the design phase, there is no given solution/way to handle such issues for now.
Logic Ranges and the crossing of type borders:
In pw the different logical types like WW, EW or DW have their own logics. These are just suggestions for 1BR, but should be defined by the user himself for 2BR.
Likewise for 2BR he/she defines logic ranges, i.e. which logic of a given type holds for one defined set of sentences and which for another set of the same type.
In 1BR the user is able to produce and set relations in a session which cross the type border, and also ranges of logics which later get split for 2BR.
It is an open question whether pw will/should also transfer the 1BR relations into 2BR 40 (for the consistency computation). But in either case, the potential mapping between different range logics or different types must be solved by the user for 2BR. That means, for example, that a closure from an EW to a WW might be forbidden, but a closure between TS of a logic range 'a' might be combined in a formula with atomic sentences from a logic 'b'. These mappings are just the same as the 2BR logic ini files and are conceptually already prepared (not implemented yet). But what should such mappings look like?
Following Rescher (same L), even the simpler case, different logics of the same type, seems to be handleable only when all WW values from the 'richer' logic are taken over into the common one, i.e. into the mapping between both logics (indeed a discussion of mappings between different logical types might not be found even in current logical papers...!).
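That mapping idea can be made concrete in a few lines (Python; the value sets and the identity embedding are assumptions; non-trivial mappings, and mappings between different types, are exactly what remains open):

```python
# Sketch: embed the 'poorer' value set into the 'richer' one, so the common
# logic carries all values of the richer logic.

RANGE_A = {10, 20}              # a 2-valued WW range
RANGE_B = {10, 20, 30, 40}      # a richer 4-valued WW range

def to_common(value, source_range, common_range=RANGE_B):
    if value not in source_range:
        raise ValueError("value not in its source range")
    if not source_range <= common_range:
        raise ValueError("source range does not embed into the common one")
    return value                # identity embedding; anything else is TBD

print(to_common(10, RANGE_A))   # 10, now read as a value of the common logic
```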
1) Is there any other possibility?
2) How to handle intuitive cases when such relations are forbidden in 2BR? Do we have to backtrace to the 1BR and let the user repair the relation (or disable it), because it is not valid for use in 2BR? Is that the only 'consistent' way to handle such cases, or are there other options or interpretations?
3) Having chosen such mappings for 2BR, do they necessarily have to be transferred in the same way into 3BR? I suppose yes, but...
4) Do such common logics produce new problems? They will, especially in the interpretation of mixed type/range logic formulas! How to deal with that? (here we will need some examples...)
5) Shall we allow, and can we handle, combined formulas from more than two logics?
In this case the formal handling of the problem is presumably straightforward (mapping tables for two combined logics). But the work to make them really meaningful is a big problem. At least as big as the problem of deciding on a logic for a given set of sentences alone!