S1 is P.
S2 is P.
…
Si is P.
(S1, S2, …, Si are all members of class S.)
Therefore, all S are P.
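As a minimal illustration (in Python, with a hypothetical list of observed cases), the schema above amounts to checking that every enumerated member of S has been found to have P and then conjecturing that all S are P; a single counterexample overturns the conclusion:

```python
# Enumerative induction: from finitely many observed members of class S
# that all have property P, conjecture "All S are P".

def enumerative_conjecture(observed_cases):
    """observed_cases: list of (member, has_P) pairs, each member belonging to S."""
    for member, has_p in observed_cases:
        if not has_p:
            # A single counterexample overturns the conclusion.
            return f"{member} is S but not P; 'All S are P' is overturned."
    return "Conjecture (not a proof): all S are P."

# Hypothetical observations: every swan examined so far is white.
print(enumerative_conjecture([("swan 1", True), ("swan 2", True), ("swan 3", True)]))
# A later observation of a non-white swan overturns the conclusion.
print(enumerative_conjecture([("swan 1", True), ("swan 2", True), ("swan 4", False)]))
```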
Enumerative induction depends only on the number of enumerated cases, so its conclusion is not reliable, and as soon as a counterexample is met the conclusion is overturned. Nevertheless, the method still has a certain role: the conclusion it yields can serve as a hypothesis for further research.

F. Bacon's induction, which combines the "three tables" with the method of exclusion, and J. S. Mill's methods of agreement and difference for finding causes (see Mill's five methods for finding causal connections), are all eliminative induction. They share a common feature: cases or experiments are selectively arranged according to the object of research, and a more reliable conclusion is obtained by eliminating certain hypotheses through comparison. The following two eliminative induction methods are improvements of Mill's methods formulated in terms of conditions.

① Suppose we want to find the necessary conditions of a phenomenon A under study. Generalizing Mill's method of agreement, we first compare the various occasions on which A appears. If the antecedent circumstances of these occasions have only one circumstance B in common, then B is the necessary condition of A; if they have more than one circumstance in common, then A may have several necessary conditions. Obviously, a circumstance C that does not appear on all of these occasions cannot be a necessary condition for the appearance of A. If the antecedent circumstances have nothing in common, it does not follow that A has no necessary condition: here the necessary condition of A may be the disjunction of two or more antecedent circumstances. For example, if C and D are circumstances not common to all the occasions, the necessary condition for the appearance of A may be the appearance of "C or D", and "C or D" can then be analyzed further. This method is a generalization of Mill's method of agreement (see the sketch after ②).
② Suppose we want to find the sufficient conditions of a phenomenon A. Following an improved form of Mill's method of difference, we choose two kinds of occasions, positive and negative: on a positive occasion A appears, and on a negative occasion A does not appear; several negative occasions may be chosen. The occasions are then compared. If only one antecedent circumstance B belongs to every positive occasion but to no negative occasion, then B is a sufficient condition of A; if two or more antecedent circumstances belong to every positive occasion but to no negative occasion, then A may have several sufficient conditions. Obviously, any antecedent circumstance that appears on some negative occasion cannot be a sufficient condition of A. If no antecedent circumstance distinguishes the positive occasions from every negative occasion, it does not follow that A has no sufficient condition, because the sufficient condition of A may be the conjunction of two or more circumstances. For example, if C and D are two such circumstances, the appearance of "C and D" together (and not of just one of them) may be a sufficient condition for the appearance of A. This method is a generalization of Mill's method of difference (illustrated in the sketch below).
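Both elimination procedures can be read as simple set operations on the antecedent circumstances recorded for each occasion. The following Python sketch, using hypothetical circumstance labels B, C, and D, keeps as candidate necessary conditions only what is common to all occasions where A appears, and as candidate sufficient conditions only what is present on every positive occasion and absent from every negative one:

```python
# Eliminative induction on recorded occasions. Each occasion is represented
# by the set of antecedent circumstances observed on it (hypothetical labels
# such as "B", "C", "D").

def necessary_candidates(positive_occasions):
    """Method 1 (generalized method of agreement): circumstances present on
    every occasion where A appears; anything absent from even one such
    occasion is eliminated. An empty result does not prove that A has no
    necessary condition (a disjunction such as 'C or D' may be needed)."""
    return set.intersection(*positive_occasions)

def sufficient_candidates(positive_occasions, negative_occasions):
    """Method 2 (generalized method of difference): circumstances present on
    every positive occasion (A appears) and absent from every negative
    occasion (A does not appear). An empty result does not prove that A has
    no sufficient condition (a conjunction such as 'C and D' may be needed)."""
    return set.intersection(*positive_occasions) - set.union(*negative_occasions)

# Hypothetical occasions.
print(necessary_candidates([{"B", "C"}, {"B", "D"}, {"B", "C", "D"}]))   # {'B'}
print(sufficient_candidates([{"B", "C", "D"}], [{"C"}, {"C", "D"}]))     # {'B'}
```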
In applying eliminative induction, sufficient conditions and necessary conditions are interdefinable: the appearance of A is a necessary condition for the appearance of B if and only if the non-appearance of A is a sufficient condition for the non-appearance of B. For example, fertilizing is a necessary condition for a good harvest, and not fertilizing is a sufficient condition for not having a good harvest. When the conditions of the phenomenon under study are being determined by eliminative induction, this relationship can be used to combine methods ① and ②.

Probability logic differs from the probability theory and statistics of mathematics: the latter developed out of the needs of mathematics and the experimental sciences, whereas probability logic developed out of mathematical logic and the need to study inductive logic. Probability logic began to take shape in different systems in the 1920s, and Carnap made important contributions to its development. Carnap divided inductive reasoning into five types. ① Direct inference. This is reasoning from a population to a sample; the population is the class of things investigated, and a sample is a subclass made up of several individuals drawn at random from the population. The premise of a direct inference gives the frequency of some property M in the population, and the conclusion is that M has the same frequency in a sample (a rough illustration follows the list below).
② Predictive inference. This is reasoning from one sample to another, different sample.
③ Inference by analogy. This is reasoning from one individual to another on the basis of the similarity between the two individuals.
④ Inverse inference. This is reasoning from a sample to the population.
⑤ Universal inference. This is reasoning from a sample to a hypothesis of universal form.
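As a rough illustration of direct inference, the first of the five types above, the following sketch draws random samples from a finite population in which a property M has a known frequency; the population size, the 30% frequency, and the sample size are arbitrary choices made only for the illustration, not part of Carnap's account:

```python
import random

# Direct inference, illustrated by sampling: from the frequency of a property M
# in a population to its frequency in a random sample.

random.seed(0)
population = [True] * 300 + [False] * 700     # 30% of individuals have M
for _ in range(3):
    sample = random.sample(population, 100)   # a random sample of 100 individuals
    print(sum(sample) / len(sample))          # frequency of M in the sample, near 0.3
```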
Carnap held that inductive logic is the theory of inductive reasoning, that it is based on the concept of probability, and that inductive logic is probability logic. Probability, on this view, is a relation between a set of propositions, namely some given evidence, and another proposition, namely a hypothesis: it is the degree to which the evidence confirms the hypothesis. Carnap called this probability₁, to distinguish it from relative frequency, which he called probability₂. If the evidence is E and the hypothesis is H, the degree of confirmation is q = c(H, E), where c is called the confirmation function, or c-function (see the toy sketch below). Using the methods of mathematical logic and semantics, Carnap constructed systems of probability logic for studying degrees of confirmation, and used them to treat the probabilities of the five types of inductive reasoning he had distinguished.

After the development of probability logic, some scholars, among them the British philosopher L. J. Cohen, began from the middle of the 20th century to use modal logic as a tool for handling inductive reasoning. Cohen pointed out that Carnap's probability logic faces many difficulties and argued that inductive reasoning should not be treated by means of probability. The object of study of his inductive logic is the degree of support that evidence E gives to a hypothesis H, written s(H, E), where s is called the support function. In his view, support comes in grades, and different grades of support are different grades of necessity in proof, that is, of provability; a theory becomes established by rising from a lower grade of necessity to a higher one. The different grades of support are the object of study of a generalized modal logic, and Cohen proved that a generalized modal logic system satisfies all the requirements of his support function.
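In Carnap's systems the degree of confirmation is standardly obtained from a measure function m by c(H, E) = m(E ∧ H) / m(E). The toy Python sketch below assumes, purely for illustration, a uniform measure over the state descriptions of a three-atom propositional language; it shows the shape of this definition rather than any particular Carnapian system such as c*:

```python
from itertools import product

# Toy degree-of-confirmation function in the spirit of Carnap's c-function:
# c(H, E) = m(E and H) / m(E), with m taken here as a uniform measure over
# the truth assignments (state descriptions) of three propositional atoms.
# The uniform choice is just one possible measure, made to keep the example small.

ATOMS = ("p", "q", "r")
STATES = [dict(zip(ATOMS, values)) for values in product([True, False], repeat=3)]

def m(proposition):
    """Fraction of state descriptions in which the proposition holds."""
    return sum(1 for state in STATES if proposition(state)) / len(STATES)

def c(hypothesis, evidence):
    """Degree of confirmation of the hypothesis H by the evidence E."""
    return m(lambda s: hypothesis(s) and evidence(s)) / m(evidence)

# Example: evidence E = "p is true", hypothesis H = "p and q are true".
print(c(lambda s: s["p"] and s["q"], lambda s: s["p"]))   # 0.5
```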
Modern inductive logic is at a new stage of intensive research. It is closely combined with modern formal logic, that is, mathematical logic, as well as with information theory, fuzzy mathematics, and artificial intelligence, and it continually explores new areas with these disciplines as tools.