1. Link-Time Demand and Reflection
When you demand a security permission at link time using the SecurityAction.LinkDemand
value for the security action, the demand applies only to early-bound
code—that is, code that uses the compile-time (or actually, the JIT
compilation-time) linker. Malicious code can use reflection with
late-binding invocation to avoid the link-time demand. To close this
potential security hole, when a method is invoked using late binding, the
.NET reflection libraries reflect the method, looking for security
permission attributes with link-time demands. If any such attributes are
found, the reflection libraries programmatically demand these
permissions, triggering a stack walk that verifies whether a caller has
circumvented the demand for the permissions. As a result, code that
works with a certain call chain that uses early binding may not work
when one of the callers uses late binding. This is because the
reflection libraries convert a link-time demand (which affects only the
immediate caller) to a full stack walk that affects all callers. This
behavior is yet another reason to avoid late-binding invocation.
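For example, here is a minimal sketch (the component, method, and permission used are hypothetical) contrasting the two invocation styles. The early-bound call is checked against the link-time demand only when the calling method is JIT-compiled; the late-bound call causes the reflection libraries to convert that demand into a full demand and walk the stack:
using System.Reflection;
using System.Security.Permissions;

public class SensitiveComponent
{
   //Link-time demand: normally verified against the immediate caller only
   [FileIOPermission(SecurityAction.LinkDemand, Unrestricted = true)]
   public void AccessFiles()
   {}
}
public class Client
{
   public void CallBothWays()
   {
      SensitiveComponent component = new SensitiveComponent();

      //Early-bound call: the link-time demand was evaluated when this
      //method was JIT-compiled
      component.AccessFiles();

      //Late-bound call: reflection finds the link-time demand attribute and
      //issues a full demand, triggering a stack walk over all callers
      MethodInfo method = typeof(SensitiveComponent).GetMethod("AccessFiles");
      method.Invoke(component, null);
   }
}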
2. Link-Time Demand and Inheritance
Consider a subclass that
uses a link-time security demand while overriding a base-class method.
The subclass demand is security-tight only if the base class demands the
same permission at link time. If you develop a class hierarchy that
requires security, it's best to define an interface that the class
hierarchy implements and demand link-time permission checks at the
interface level. This provides the demand for every level in the class
hierarchy.
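For example, the following sketch (the interface, methods, and the particular permission are hypothetical) places the link-time demand on the interface method. Any class in the hierarchy that exposes the method through the interface is then subject to the demand, as long as callers access the hierarchy through the interface:
using System.Security.Permissions;

public interface ISecureOperation
{
   [SecurityPermission(SecurityAction.LinkDemand,
    Flags = SecurityPermissionFlag.UnmanagedCode)]
   void DoWork();
}
public class BaseOperation : ISecureOperation
{
   public virtual void DoWork()
   {}
}
public class DerivedOperation : BaseOperation
{
   //The override does not need to repeat the demand: callers that invoke it
   //through ISecureOperation are checked against the interface-level demand
   public override void DoWork()
   {}
}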
3. Strongly Named Assemblies and Full Trust
A strongly named
assembly can easily be shared by multiple applications whose components
come from potentially untrusted origins. Imagine a component library
vendor that produces an assembly and installs it in the GAC. That
assembly is now available for use by any unknown, malicious client. To
prevent even the potential for abuse, by default a .NET strongly named
assembly can be used only by client assemblies granted the FullTrust
permission set. This ensures that partially trusted clients can't use
assemblies that are not properly secured. .NET enforces this default by
placing a link-time demand for the FullTrust permission set on every
public or protected method on every public class in the assembly. The
JIT compiler does this automatically when it detects that the assembly
has a strong name. For example, if a strong name is specified, the JIT
compiler converts this method definition:
public void SomeMethod( )
{}
to this:
[PermissionSet(SecurityAction.LinkDemand,Name = "FullTrust")]
public void SomeMethod( )
{}
A partially trusted
assembly can still implement interfaces defined in a strongly named
assembly, because interfaces have no implementations to protect and the
compiler doesn't change their definitions.
This extra precaution
can be a liability, especially if you intend for your assembly to be
used by semi-trusted assemblies or to run in a partially trusted
environment. For example, if the client assembly is a partially trusted
ClickOnce application or if the client is coming from the local
intranet, it won't be able to access your code. If you want to allow
partially trusted callers to use your assembly, you can apply the
attribute AllowPartiallyTrustedCallersAttribute to the assembly:
[assembly:AllowPartiallyTrustedCallers]
This instructs the compiler not to add the link-time demand for full trust to the public entry points.
4. Unsafe Code
C# (and potentially future .NET languages) allows you to use unsafe code to directly manipulate memory using pointers. Such C# code is called unsafe
because it forgoes most of the safety mechanisms of .NET memory management, such as array bounds checking. However, unsafe code is still managed code,
because it runs in the CLR and it manipulates the managed heap. This can
present a security breach, because objects from multiple assemblies
(with potentially different security permissions) share the same heap. A
malicious assembly may not have permission to access assemblies that
are more privileged, but it can potentially use unsafe code to traverse
the managed heap and read or modify the state of objects. Worse yet,
even if you try to isolate questionable assemblies in one app domain and
put trusted assemblies in another, it will be to no avail. Because all app domains in the same physical process share the
same managed heap, a malicious component could use unsafe code to access
the other app domains. Clearly, only trusted assemblies should be
granted permission to use unsafe code. .NET doesn't have an unsafe code
permission, but it does have a security permission with the right to
skip verification. Because unsafe code is unverifiable, you can use this
permission to grant, in effect, permission for unsafe code. Note that
the FullTrust permission set grants that permission, as does the
dedicated SkipVerification permission set.
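As an illustration of one possible use (the class and method below are hypothetical, not a prescribed pattern), a trusted component that wraps pointer-based code can demand the skip-verification right from its callers, so that partially trusted code cannot reach the unsafe code through it. Compiling this snippet requires the /unsafe compiler switch:
using System.Security.Permissions;

public static class UnsafeOperations
{
   //Demand the closest equivalent of an "unsafe code" permission from all callers
   [SecurityPermission(SecurityAction.Demand, SkipVerification = true)]
   public static unsafe void ZeroBuffer(byte* buffer, int length)
   {
      //Direct pointer manipulation of the supplied memory
      for(int i = 0; i < length; i++)
      {
         buffer[i] = 0;
      }
   }
}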
5. Security and Remote Calls
As long as the client
and the object share the same physical process, .NET can enforce code
access permission checks using stack walks, even when the call is made
across app domains. This is possible because the cross-app domain
remoting
channel uses the original client thread to invoke the call, so the stack
walk can detect callers without the required permissions. However, in a
distributed application that spans processes and machines, multiple
physical threads are involved every time the call flows to another
location. Because each thread has its own stack, the stack-walk strategy
as a mechanism for enforcing access permissions doesn't work when
crossing the process boundary. Link-time permission demands are of no
use either, because the component is linked against the trusted host,
not the remote client. In addition, each machine may well have a
different code access policy, and what is allowed on one machine may be
forbidden on another.
In order to authenticate and authorize remote calls, you need the security call context—
the caller's identity and credentials—to flow across process and
machine boundaries. .NET 2.0 introduced support for propagating the
security call context for remoting, and even support for encrypted
channels. That said, if you need to secure remote calls on your
intranet, I recommend using Enterprise Services instead of remoting.
Enterprise Services offer a richer security model than remoting (such as
support for audit trails and granular role-based security), and more
significantly, applications that require secure remote calls typically
require other aspects that Enterprise Services support natively, such as
distributed transactions and disconnected calls. Remoting should be used when you need extensibility, not when you need enterprise-level services. For that, use Enterprise Services.
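If you nonetheless use remoting on your intranet, the following sketch (assuming the .NET 2.0 secure TCP channel) shows the kind of configuration involved in propagating the security call context: the channel is registered with its secure property enabled, so the caller's identity flows with the call and the traffic is encrypted:
using System.Collections;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

public static class ChannelInstaller
{
   public static void RegisterSecureChannel()
   {
      IDictionary properties = new Hashtable();
      properties["secure"] = true; //Authenticate callers and encrypt the traffic

      TcpChannel channel = new TcpChannel(properties, null, null);
      ChannelServices.RegisterChannel(channel, true);
   }
}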
6. Serialization
Imagine a class
containing sensitive information that needs to interact with partially
trusted clients. If a malicious client could provide its own
serialization
formatters, it would be able to gain access to the sensitive information
or deserialize the class with bogus state. To prevent abuse by such
serialization clients, a class can apply the SecurityPermissionAttribute with the SecurityPermissionFlag.SerializationFormatter flag to demand at link time that its clients have the security permission to provide a serialization formatter:
[SecurityPermission(SecurityAction.LinkDemand,
Flags = SecurityPermissionFlag.SerializationFormatter)]
[Serializable]
public class MyClass
{...}
If the class has sensitive state information, you may want to consider
using custom serialization to encrypt and decrypt the state during
serialization and deserialization. The problem with demanding the
serialization formatter permission at the class level is that it
precludes clients that don't have that permission and don't even need to
serialize the class from using the class at all. In such cases, it's
better to provide custom serialization and demand the permission only on
the deserialization constructor and GetObjectData( ):
[Serializable]
public class MyClass : ISerializable
{
public MyClass( )
{}
[SecurityPermission(SecurityAction.LinkDemand,
Flags = SecurityPermissionFlag.SerializationFormatter)]
public void GetObjectData(SerializationInfo info,StreamingContext context)
{...}
[SecurityPermission(SecurityAction.LinkDemand,
Flags = SecurityPermissionFlag.SerializationFormatter)]
protected MyClass(SerializationInfo info,StreamingContext context)
{...}
}
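If you also want to protect the state itself, as suggested above, the same custom serialization can encrypt the state on the way out and decrypt it on the way in. Here is a minimal sketch; the class name, the field, and the use of DPAPI via the ProtectedData class (in the System.Security assembly) are assumptions rather than a prescribed mechanism, and DPAPI ties the data to the current user on the current machine:
using System;
using System.Runtime.Serialization;
using System.Security.Cryptography;
using System.Security.Permissions;
using System.Text;

[Serializable]
public class MySecureClass : ISerializable
{
   string m_Secret = String.Empty;

   public MySecureClass()
   {}
   [SecurityPermission(SecurityAction.LinkDemand,
    Flags = SecurityPermissionFlag.SerializationFormatter)]
   public void GetObjectData(SerializationInfo info, StreamingContext context)
   {
      //Encrypt the sensitive state before handing it to the formatter
      byte[] clear = Encoding.UTF8.GetBytes(m_Secret);
      byte[] cipher = ProtectedData.Protect(clear, null, DataProtectionScope.CurrentUser);
      info.AddValue("Secret", cipher);
   }
   [SecurityPermission(SecurityAction.LinkDemand,
    Flags = SecurityPermissionFlag.SerializationFormatter)]
   protected MySecureClass(SerializationInfo info, StreamingContext context)
   {
      //Decrypt the state during deserialization
      byte[] cipher = (byte[])info.GetValue("Secret", typeof(byte[]));
      byte[] clear = ProtectedData.Unprotect(cipher, null, DataProtectionScope.CurrentUser);
      m_Secret = Encoding.UTF8.GetString(clear);
   }
}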
If all you need are the
standard .NET formatters, there is a different solution altogether to
the problem of malicious serialization clients. Use the attribute StrongNameIdentityPermissionAttribute to demand at link time that only Microsoft-provided assemblies serialize and deserialize your class:
public static class PublicKeys
{
public const string Microsoft = "0024000004800000940000000602000000240000"+
"52534131000400000100010007D1FA57C4AED9F0"+
"A32E84AA0FAEFD0DE9E8FD6AEC8F87FB03766C83"+
"4C99921EB23BE79AD9D5DCC1DD9AD23613210290"+
"0B723CF980957FC4E177108FC607774F29E8320E"+
"92EA05ECE4E821C0A5EFE8F1645C4C0C93C1AB99"+
"285D622CAA652C1DFAD63D745D6F2DE5F17E5EAF"+
"0FC4963D261C8A12436518206DC093344D5AD293";
}
[Serializable]
public class MyClass : ISerializable
{
public MyClass( )
{}
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
PublicKey = PublicKeys.Microsoft)]
public void GetObjectData(SerializationInfo info,StreamingContext context)
{...}
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
PublicKey = PublicKeys.Microsoft)]
protected MyClass(SerializationInfo info,StreamingContext context)
{...}
}
If you wish to allow
either Microsoft or clients with the serialization formatter permission
to serialize your class, use a link-time demand choice on both
permissions:
[StrongNameIdentityPermission(SecurityAction.LinkDemandChoice,
PublicKey = PublicKeys.Microsoft)]
[SecurityPermission(SecurityAction.LinkDemandChoice,
Flags = SecurityPermissionFlag.SerializationFormatter)]
public void GetObjectData(SerializationInfo info,StreamingContext context)
{...}
[StrongNameIdentityPermission(SecurityAction.LinkDemandChoice,
PublicKey = PublicKeys.Microsoft)]
[SecurityPermission(SecurityAction.LinkDemandChoice,
Flags = SecurityPermissionFlag.SerializationFormatter)]
protected MyClass(SerializationInfo info,StreamingContext context)
{...}
7. Transactions
An application that uses transactions
managed by the Lightweight Transaction Manager (LTM) can enlist at most a single durable resource, such as SQL Server 2005. This, however, is not the case with a distributed transaction,
which can interact with multiple resources, potentially across the
network. This opens the way for denial-of-service attacks by malicious code, or even just accidental excessive use of such resources. To prevent that, the System.Transactions namespace defines the DistributedTransaction security permission. Whenever a transaction is promoted from an LTM transaction to an OleTx transaction, the code that triggered the promotion is verified to have the DistributedTransaction permission.
Verification of the security permission is done like any other
code-access security verification, using a stack walk, demanding from
every caller up the stack the DistributedTransaction permission. Note
again that the security demand will affect the code that triggered the
promotion, not necessarily the code that created the LTM transaction in
the first place (although that can certainly be the case if they are on
the same call stack).
This permission demand is of
particular importance for Smart Client applications deployed in a
partial trust environment, such as the LocalIntranet zone, that want to
perform transactional work against multiple resources. None of the
predefined partial trust zones grant the DistributedTransaction
permission. You will have to grant that permission using a custom code
group, or manually list that permission in the application's ClickOnce
deployment manifest. Another solution altogether is to introduce a
middle tier between the client application and the resources, and have
the middle tier encapsulate accessing these resources transactionally.
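For example, the following sketch (the connection strings are hypothetical) opens two durable resources inside a single TransactionScope. Opening the second connection promotes the transaction to OleTx, and that promotion succeeds only if the code up the call stack has been granted the DistributedTransaction permission:
using System.Data.SqlClient;
using System.Transactions;

public static class TransferService
{
   public static void Transfer()
   {
      //Hypothetical connection strings identifying two different durable resources
      string orders   = "Data Source=ServerA;Initial Catalog=Orders;Integrated Security=True";
      string shipping = "Data Source=ServerB;Initial Catalog=Shipping;Integrated Security=True";

      using(TransactionScope scope = new TransactionScope())
      {
         using(SqlConnection connection1 = new SqlConnection(orders))
         {
            connection1.Open(); //Enlists in the LTM transaction
            //Execute commands against the first resource
         }
         using(SqlConnection connection2 = new SqlConnection(shipping))
         {
            //A second durable resource triggers promotion to an OleTx transaction,
            //which demands the DistributedTransaction permission from every caller
            connection2.Open();
            //Execute commands against the second resource
         }
         scope.Complete();
      }
   }
}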