In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy (if a policy exists that the system is trusted to enforce).

The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use, and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from the expected, and whether or not those programs are innocent or malicious or whether they execute tasks that are undesired by the user.

A trusted system can also be seen as a level-based security system where protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as unclassified (U), confidential (C), secret (S), top secret (TS), and beyond. These also enforce the policies of no read-up and no write-down.

Trusted systems in classified information

A subset of trusted systems ("Division B" and "Division A") implement mandatory access control (MAC) labels, and as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.

Central to the concept of U.S. Department of Defense-style trusted systems is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is (a) tamper-proof, (b) always invoked, and (c) small enough to be subject to independent testing, the completeness of which can be assured.

According to the U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", a set of "evaluation classes" were defined that described the features and assurances that the user could expect from a trusted system.

The dedication of significant system engineering toward minimizing the complexity (not size, as often cited) of the trusted computing base (TCB) is key to the provision of the highest levels of assurance (B3 and A1). The TCB is defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems in that, the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".

The TCSEC has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3, differing only in documentation standards. In contrast, the more recently introduced Common Criteria (CC), which derive from a blend of technically mature standards from various NATO countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner, and lack the precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support – even encourage – an inter-mixture of security requirements culled from a variety of predefined "protection profiles." While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.

The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Air Force Electronic Systems Division (Hanscom AFB, MA), devised the Bell–LaPadula model, in which a trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data, such as files, disks, or printers) and subjects (active entities that cause information to flow among objects, e.g. users, or system processes or threads operating on behalf of users). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows. At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set in which any two elements have a least upper bound and a greatest lower bound; for any two labels in such an ordering, one "dominates" the other, "is dominated by" it, or the two are incomparable.) She defined a generalized notion of "labels" that are attached to entities, corresponding more or less to the full security markings one encounters on classified military documents, e.g. TOP SECRET WNINTEL TK DUMBO.
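The dominance relation on such labels can be made concrete with a short sketch. The following Python fragment is illustrative only (the Label type, level names, and compartment markings are assumptions for the example, not any real system's representation):

```python
# A minimal sketch of label dominance (all names below are illustrative).
from dataclasses import dataclass

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str              # hierarchical classification
    compartments: frozenset # non-hierarchical markings, e.g. {"WNINTEL", "TK"}

def dominates(a: Label, b: Label) -> bool:
    """a dominates b iff a's level is at least b's and a carries
    every compartment that b carries."""
    return LEVELS[a.level] >= LEVELS[b.level] and a.compartments >= b.compartments

doc  = Label("TOP SECRET", frozenset({"WNINTEL", "TK"}))
user = Label("TOP SECRET", frozenset({"WNINTEL"}))
print(dominates(user, doc))  # False: the user lacks the TK compartment
print(dominates(doc, user))  # True
```

Note that two labels can be incomparable (neither dominates the other), which is what makes the label space a partial order; taking the maximum of the levels and the union of the compartments yields a least upper bound, and the dual construction a greatest lower bound, so the space forms a lattice.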
Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report, entitled Secure Computer System: Unified Exposition and Multics Interpretation. They stated that labels attached to objects represent the sensitivity of data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject. (However, there can be a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself.)

The concepts are unified with two properties, the "simple security property" (a subject can only read from an object that it dominates) and the "confinement property," or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no read-up" and "no write-down," respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients may discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, then the no read-up and no write-down rules rigidly enforced by the reference monitor are sufficient to constrain Trojan horses, one of the most general classes of attacks (viz., the popularly reported worms and viruses are specializations of the Trojan horse concept).
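A reference monitor enforcing these two properties can be sketched on top of the dominance check above. This is a minimal illustration, not any real system's TCB; it assumes the Label and dominates() definitions from the previous sketch:

```python
# A toy reference monitor mediating every access (illustrative only;
# assumes Label and dominates() from the previous sketch).

class AccessDenied(Exception):
    pass

def check_access(subject: Label, obj: Label, mode: str) -> None:
    """Raise AccessDenied unless the request satisfies both properties."""
    if mode == "read" and not dominates(subject, obj):
        raise AccessDenied("simple security property violated: no read-up")
    if mode == "write" and not dominates(obj, subject):
        raise AccessDenied("*-property violated: no write-down")

secret = Label("SECRET", frozenset())
conf   = Label("CONFIDENTIAL", frozenset())

check_access(secret, conf, "read")       # permitted: reading down is allowed
try:
    check_access(secret, conf, "write")  # refused: writing down could leak
except AccessDenied as e:
    print(e)
```

The asymmetry is the point: even a Trojan horse running inside the SECRET subject cannot copy what it has read into the CONFIDENTIAL object, because the monitor, not the user's software, makes the decision.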
The Bell–LaPadula model technically only enforces "confidentiality" or "secrecy" controls, i.e. it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity" (i.e. the problem of accuracy, or even provenance, of objects) and the attendant trustworthiness of subjects not to modify or destroy it inappropriately, is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark-Wilson model and Shockley and Schell's program integrity model, "The SeaView Model".

An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users and to the files they access or modify. In contrast, an additional class of controls, termed discretionary access controls (DACs), are under the direct control of system users. Familiar protection mechanisms such as permission bits (supported by UNIX since the late 1960s and – in a more flexible and powerful form – by Multics since earlier still) and access control lists (ACLs) are examples of DACs.
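The discretionary character of DACs can be illustrated with a small sketch (the class and method names here are hypothetical, not any particular operating system's interface): the object's owner, an ordinary user, decides who else gets access, which is precisely what a MAC label never permits a user to do.

```python
# A hypothetical DAC-protected object: the owner edits the ACL at will.

class DacObject:
    def __init__(self, owner: str):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # owner starts with full access

    def grant(self, requester: str, user: str, rights: set) -> None:
        if requester != self.owner:
            raise PermissionError("only the owner may change the ACL")
        self.acl.setdefault(user, set()).update(rights)

    def allowed(self, user: str, right: str) -> bool:
        return right in self.acl.get(user, set())

memo = DacObject(owner="alice")
memo.grant("alice", "bob", {"read"})
print(memo.allowed("bob", "read"))   # True: granted at alice's discretion
print(memo.allowed("bob", "write"))  # False: never granted
```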
The behavior of a trusted system is often characterized in terms of a mathematical model, which may be more or less rigorous depending upon applicable operational and administrative constraints. The model takes the form of a finite state machine (FSM) with state criteria, state transition constraints (a set of "operations" that correspond to state transitions), and a descriptive top-level specification, or DTLS (which entails a user-perceptible interface such as an API, a set of system calls in UNIX, or system exits in mainframes). Each element of the aforementioned engenders one or more model operations.
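As a rough sketch of what such a model looks like in code (an assumed toy formulation, reusing the Label, dominates(), and AccessDenied definitions from the sketches above), a state can be taken as the set of current accesses, the state criterion as "every current access satisfies both Bell–LaPadula properties," and each operation as a guarded transition:

```python
# Assumed toy FSM view: a state is a frozenset of (subject, object, mode)
# access triples; operations refuse transitions out of secure states.

State = frozenset

def secure(state: State) -> bool:
    """State criterion: every current access satisfies both properties."""
    return all((mode != "read" or dominates(subj, obj)) and
               (mode != "write" or dominates(obj, subj))
               for subj, obj, mode in state)

def request_access(state: State, subj: Label, obj: Label, mode: str) -> State:
    """Transition constraint: refuse any operation whose successor state
    would violate the state criterion."""
    successor = frozenset(state | {(subj, obj, mode)})
    if not secure(successor):
        raise AccessDenied("transition to an insecure state refused")
    return successor

s0 = frozenset()  # initial state: no accesses, trivially secure
s1 = request_access(s0, Label("SECRET", frozenset()),
                    Label("SECRET", frozenset()), "read")
```

If the initial state satisfies the criterion and every operation refuses transitions that would violate it, then by induction every reachable state is secure; this is the shape of the "basic security theorem" argument made over models of this kind.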
Trusted systems in trusted computing

The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.

Trusted systems in policy analysis

In the context of national or homeland security, law enforcement, or social control policy, trusted systems provide conditional prediction about the behavior of people or objects prior to authorizing access to system resources. For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and credit or identity scoring systems in financial and anti-fraud applications. In general, they include any system in which probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or for allocating resources against likely threats (including their use in the design of system constraints to control behavior within the system), or in which deviation analysis or systems surveillance is used to ensure that behavior within systems complies with expected or authorized parameters.

The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints.
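The DEFAULT=DENY stance itself is easy to state in code. A purely illustrative sketch (the principals, resources, and policy table are hypothetical):

```python
# Default-deny authorization: anything not explicitly allowed is refused.
POLICY = {("alice", "payroll-db"): "allow"}

def authorize(principal: str, resource: str) -> bool:
    return POLICY.get((principal, resource)) == "allow"  # DEFAULT=DENY

print(authorize("alice", "payroll-db"))    # True: explicitly authorized
print(authorize("mallory", "payroll-db"))  # False: no rule, so denied
```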
In this emergent model, "security" is not geared towards policing but to risk management through surveillance, information exchange, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate social governance methodologies.

Trusted systems in information theory

Trusted systems in the context of information theory are based on the following definition: "Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel" (Ed Gerck).

In information theory, information has nothing to do with knowledge or meaning; it is simply that which is transferred from source to destination, using a communication channel. If, before transmission, the information is available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect, as measured by the uncertainty of the party as to what the message will be.
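This notion of information as surprise is directly computable with the standard self-information formula (the probabilities below are illustrative assumptions):

```python
import math

# Self-information of a received message: the less the destination
# expects it, the more information it carries.
def self_information(p: float) -> float:
    """Information content, in bits, of a message of probability p."""
    return math.log2(1 / p)

print(self_information(1.0))      # 0.0 bits: already known, transfer is zero
print(self_information(0.5))      # 1.0 bit
print(self_information(1 / 256))  # 8.0 bits: highly unexpected
```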
Likewise, trust as defined by Gerck has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal and other overly variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological; trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), where all necessarily different subjective and intersubjective realizations of trust in each subsystem (man and machines) may coexist.

Taken together in the model of information theory, "information is what you do not expect" and "trust is what you know". Linking both concepts, trust is seen as "qualified reliance on received information". In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. The deepening of these questions leads to complex conceptions of trust, which have been thoroughly studied in the context of business relationships. It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information system(s) in which it is conceived: higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.

An example of the calculus of trust is "If I connect two trusted systems, are they more or less trusted when taken together?".
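The question has no single answer; it depends on how the systems are composed. One assumed toy model (treating each system's trustworthiness as an independent probability of correct behavior, which real systems need not satisfy) makes the dependence visible:

```python
# Assumed toy model: each system behaves correctly with an independent
# probability; composition determines whether trust rises or falls.

def chain(p_a: float, p_b: float) -> float:
    """Both must behave correctly, e.g. data flows through both in series."""
    return p_a * p_b

def either(p_a: float, p_b: float) -> float:
    """At least one must behave correctly, e.g. redundant cross-checking."""
    return 1 - (1 - p_a) * (1 - p_b)

print(chain(0.99, 0.99))   # ~0.98: trust decreases in series
print(either(0.99, 0.99))  # ~0.9999: trust increases with redundancy
```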
The IBM Federal Software Group has suggested that "trust points" provide the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered to be requisite for achieving the desired collaborative, service-oriented architecture vision.

See also

Accuracy and precision
Computer security
Data quality
Information quality
Trusted Computing

References

Lunt, Teresa; Denning, Dorothy; Schell, Roger R.; Heckman, Mark; Shockley, William R. (1990). The SeaView Security Model. IEEE Trans. Software Eng., 16, 593-607. doi:10.1109/SECPRI.1988.8114.
The concept of trusted systems described here is discussed in Taipale, K.A. (2005). The Trusted Systems Problem: Security Envelopes, Statistical Threat Analysis, and the Presumption of Innocence. Homeland Security - Trends and Controversies, IEEE Intelligent Systems, Vol. 20, No. 5, pp. 80-83 (Sept./Oct. 2005).
Cesare Beccaria, On Crimes and Punishment (1764).
Michel Foucault, Discipline and Punish (1975; Alan Sheridan, tr., 1977, 1995).
Feghhi, J. and P. Williams (1998). Trust Points, in Digital Certificates: Applied Internet Security. Addison-Wesley, ISBN 0-201-30980-7.
Gerck, E. Toward Real-World Models of Trust: Reliance on Received Information.
Gerck, E. Trust as Qualified Reliance on Information, Part I. The COOK Report on Internet, Volume X, No. 10, January 2002, ISSN 1071-6327.
Gregory, John D. (1997). Electronic Legal Records: Pretty Good Authentication?
Huemer, L. (1998). Trust in business relations: Economic logic or social interaction? Umeå: Boréa. ISBN 91-89140-02-8.
Ivanov, K. (1972). Quality-control of information: On the concept of accuracy of information in data banks and in management information systems. The University of Stockholm and The Royal Institute of Technology.
Daly, Christopher (2004). A Trust Framework for the DoD Network-Centric Enterprise Services (NCES) Environment. IBM Corp. (Request from the IEEE Computer Society's ISSAA; archived 2011-07-26 at the Wayback Machine.)

External links

Global Information Society Project – a joint research project