CEssNamespace Locks
===================

There are two namespace locks : level1 and level2.

Level1 is supposed to be a lightweight lock and guards members like
the current state of the namespace and members that must be accessed when
signaling events ( such as the deferred event queue ). Because of the last
point, very little should be done while holding this lock.

Level2 is the heavy-handed lock and guards all of the changes to the
provider cache and to the subscription objects ( e.g. binding.h ).
Level2 must always be acquired before level1 if both are needed.

Neither level2 nor level1 may ever be held when making calls to the providers.
This is problematic because level2 is acquired at the top level and the calls
to the providers occur deep down in the provider cache. To handle this, all
calls to providers are scheduled on a Postponed list associated with the
thread. After level2 is released, the postponed operations are executed and
the provider calls are made. Note that these calls must occur on the same
control path as the one that scheduled them, so they cannot be executed
asynchronously.
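
Below is a minimal sketch of this postpone pattern. The names here
( CPostponedList, CallProvider, TopLevelOperation ) are hypothetical and the
locks are modeled as plain std::mutex objects; only the control flow is meant
to mirror the description above.

// Illustrative sketch only -- not the real ESS classes.
#include <functional>
#include <mutex>
#include <vector>

struct CPostponedList
{
    // Provider calls scheduled while level2 is held; executed later on the
    // same thread, after the namespace locks are released.
    std::vector< std::function<void()> > m_aCalls;

    void Add( std::function<void()> fnCall ) { m_aCalls.push_back( std::move( fnCall ) ); }

    void Execute()
    {
        for ( auto& fn : m_aCalls )
            fn();
        m_aCalls.clear();
    }
};

std::mutex g_csLevel2;   // heavy lock : provider cache, subscription objects
std::mutex g_csLevel1;   // light lock : namespace state, deferred event queue

void CallProvider( int nId ) { /* call out to the event provider */ }

void TopLevelOperation( CPostponedList& rPostponed )
{
    {
        // Level2 is always taken before level1 when both are needed.
        std::lock_guard<std::mutex> lockL2( g_csLevel2 );
        std::lock_guard<std::mutex> lockL1( g_csLevel1 );

        // ... update the provider cache / subscription objects ...

        // The provider may NOT be called here; schedule the call instead.
        rPostponed.Add( [] { CallProvider( 1 ); } );
    }

    // Both locks are released; the postponed provider calls now run on the
    // same control path that scheduled them.
    rPostponed.Execute();
}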

Level2 can never be held when signaling an event. One reason is that some
subscriptions can be synchronous, and the action taken on notification could
be to call back into ESS ( say, to cancel a subscription ). The other reason
is that it is possible to acquire level2 while holding a filter proxy lock,
and we must avoid the reverse scenario to avoid a deadlock.

ESS Sink Lock
=============

This is a shared lock whose only purpose is to facilitate shutdown of ESS.
Since all public access to ESS is performed through the esssink, this is
where the ESS shutdown check is performed.

Each entry point except shutdown() will ...
1 ) enter the esssink lock with shared access,
2 ) check to see if shutdown has been performed; if so, go to (4),
3 ) perform the op,
4 ) release the lock.

Shutdown will ...
1 ) acquire the lock for exclusive access,
2 ) set shutdown,
3 ) release the lock.

Since the shared lock prevents writer starvation, the shutdown op waits for
all current ops to finish, but does not allow any new ones to proceed until
it has executed.
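
A sketch of this pattern follows. It assumes a std::shared_mutex in place of
the actual shared-lock class, and the class and method names ( CEssSink,
Indicate, Shutdown ) are hypothetical; it is only meant to show the four-step
protocol above.

// Illustrative sketch only -- not the real esssink code.
#include <shared_mutex>

class CEssSink
{
    std::shared_mutex m_Lock;                // the ESS sink lock
    bool              m_bShutdown = false;

public:
    // Every entry point except Shutdown() follows this pattern.
    bool Indicate( /* ... */ )
    {
        std::shared_lock<std::shared_mutex> lock( m_Lock );   // 1 ) shared access

        if ( m_bShutdown )                                     // 2 ) check shutdown
            return false;                                      //     if so, skip the op

        // 3 ) perform the op
        // ...

        return true;                                           // 4 ) lock released on return
    }

    void Shutdown()
    {
        std::unique_lock<std::shared_mutex> lock( m_Lock );    // 1 ) exclusive access
        m_bShutdown = true;                                     // 2 ) set shutdown
    }                                                           // 3 ) lock released
};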

Filter Proxy Lock
=================

PURPOSE : To synchronize the signaling of an event through the proxy with
disconnecting the proxy. When disconnecting the proxy from the stub, we want
to ensure that all calls currently executing through that proxy are complete.
( We could have used CoDisconnectObject on the stub for the same functionality,
but this would only work when the proxy was in a separate process/apartment
from the stub, which is not always the case. )

TYPE : This is a CWbemCriticalSection ( but it should be a shared lock so that
the signaling threads request shared access and the Disconnect() thread
requests exclusive access. )

RULES :
Must be acquired before the Namespace Level2 Lock. The reason is that the lock
MUST be held across the signaling of an event, for the reasons described above.
Since we support synchronous delivery, there is nothing stopping a consumer
from turning around and issuing a request that will grab the level2 lock in
the same namespace. Because of this, we must always ensure that the proxy lock
is acquired BEFORE acquiring the level2 namespace lock.
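
The required ordering can be sketched as follows. Names and lock types are
hypothetical ( std::shared_mutex instead of CWbemCriticalSection ); the point
is only the acquisition order, proxy lock first and then level2.

// Illustrative sketch only -- shows the required lock order, not the real code.
#include <mutex>
#include <shared_mutex>

std::shared_mutex g_ProxyLock;   // filter proxy lock ( ideally shared, per the TYPE note )
std::mutex        g_csLevel2;    // namespace level2 lock

void SignalEventThroughProxy()
{
    // Proxy lock FIRST : it must be held across delivery, and a synchronous
    // consumer may call back into ESS and take level2 underneath us.
    std::shared_lock<std::shared_mutex> proxy( g_ProxyLock );

    // ... deliver the event; a re-entrant call may now legally do :
    // std::lock_guard<std::mutex> l2( g_csLevel2 );   // proxy -> level2 is the legal order
}

void DisconnectProxy()
{
    // Exclusive access : waits until all in-flight deliveries through this
    // proxy have drained, after which the proxy can be detached from the stub.
    std::unique_lock<std::shared_mutex> proxy( g_ProxyLock );
    // ... disconnect ...
}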

Provider Exec Line
==================

PURPOSE :
This is a different sort of sync mechanism. It is really a queue more than
a lock. It allows the user to place requests in a queue and then to execute
them later. The major difference between this and a normal queue is in the
way that requests are fetched from the queue and executed. The exec line
allows multiple threads to fetch requests from the queue and execute them
while still preserving the logical ordering of the requests in the queue.

For example, let's say that the following requests are placed in the
queue ...

A, B, C <-- rear

Let's say that T1 placed A and B in the queue and T2 placed C in the queue.
Then both threads try to service their requests. This structure would ensure
that A and B complete before C can execute.

The reason for such a sync structure is that we do not make calls to a provider
while holding the namespace lock. So we 'postpone' the requests to the
provider. Later, after releasing the namespace lock, we execute the
'postponed' operations. This structure ensures that execution of those
postponed operations occurs in the same logical order as the namespace
operations. E.g. if Namespace Op N1 causes Postponed Operation P1, and N2
causes Postponed Operation P2, then P1 will be executed before P2 even if
the thread handling N2 tries to execute its postponed operations first.

The following protocol is used with this sync mechanism ( a minimal sketch
appears after this list ) :
1 ) Get In Line - this reserves a place in the line, called a Turn. A turn
is associated with a postponed request. The turn is returned from this
step.
2 ) Wait For Turn - once the turn comes up, the request can be executed.
3 ) End Turn - after the request is executed, the turn is ended, thereby
allowing the next turn to execute.

Each provider record has an associated exec line.
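
A minimal sketch of such an exec line follows. The names ( CExecLine, CTurn,
GetInLine, WaitForTurn, EndTurn ) are taken from the protocol above but the
implementation is hypothetical : a ticket counter plus a condition variable
that serves turns strictly in the order they were handed out.

// Illustrative sketch only -- not the real ESS implementation.
#include <condition_variable>
#include <mutex>

class CExecLine
{
    std::mutex              m_cs;
    std::condition_variable m_cv;
    unsigned long           m_ulNextTicket = 0;   // next place in line to hand out
    unsigned long           m_ulServing    = 0;   // turn currently allowed to run

public:
    typedef unsigned long CTurn;

    // 1 ) Get In Line : reserve a place in the line. The turn is returned
    //     from this step.
    CTurn GetInLine()
    {
        std::lock_guard<std::mutex> lock( m_cs );
        return m_ulNextTicket++;
    }

    // 2 ) Wait For Turn : block until every earlier turn has ended.
    void WaitForTurn( CTurn turn )
    {
        std::unique_lock<std::mutex> lock( m_cs );
        m_cv.wait( lock, [&] { return m_ulServing == turn; } );
    }

    // 3 ) End Turn : allow the next postponed request to execute.
    void EndTurn( CTurn /*turn*/ )
    {
        {
            std::lock_guard<std::mutex> lock( m_cs );
            ++m_ulServing;
        }
        m_cv.notify_all();
    }
};

With this sketch, if T1 gets turns for A and B and T2 then gets a turn for C,
T2's WaitForTurn() will not return until T1 has ended the turns for both A and
B, which matches the A, B, C example above.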

RULES : It is illegal to obtain a proxy lock when holding one or more turns
in any exec line. The reason is that it is possible, while holding the
proxy lock, to wait for a turn. ( Just as it is possible when holding a
proxy lock to obtain the namespace lock. ) For this reason, if we allowed
waiting for the proxy lock while holding a turn, then we would have a
deadlock.
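
Taken together, the rules above imply a single acquisition order; a
comment-style restatement, purely for reference :

// Acquisition order implied by the rules in this document ( outermost first ) :
//
//     Filter Proxy Lock  -->  Namespace Level2  -->  Namespace Level1
//
// Additional constraints :
//     - A proxy lock must never be requested while holding one or more
//       exec-line turns.
//     - Neither namespace lock may be held while calling a provider ( use the
//       postponed list ), and level2 must never be held while signaling an event.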