chencheng
Views: 2718 | Replies: 7

When does a FastIo dispatch routine get called?

OP#
Posted: 2004-12-11 22:25
I've just started reading FileMon. It's long and I can't untangle it; I couldn't find where these routines actually get called.
Could some guru please advise?

Study hard and make progress every day
chencheng
#1
Posted: 2004-12-12 14:14
[quoting myself] I've just started reading FileMon. It's long and I can't untangle it; I couldn't find where these routines actually get called. Could some guru please advise?

So when exactly does it get called? Thanks, points on offer~~
fslife
#2
Posted: 2004-12-13 09:04
Both the IRP path and the FastIo path are invoked by the I/O Manager. When a FastIo call does not succeed (when the requested data is not in the cache), the I/O Manager builds an IRP and sends it to the top of the driver stack. If the FastIo call succeeds, the IRP path is never executed. For a detailed introduction to IRPs and FastIo, see the OSR documentation; it explains this very clearly.
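The fallback described above can be sketched as a small user-mode simulation. Everything here (fast_io_read, build_and_dispatch_irp, the tiny in-memory cache) is a hypothetical stand-in for illustration, not real WDK code: the fast I/O routine returns true only when it can satisfy the request from the cache, and the I/O Manager builds an IRP only when it returns false.

```c
/* Sketch of the I/O Manager choosing between the fast I/O path and
 * the IRP path. All names are illustrative, not WDK APIs. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define CACHED_BYTES 8
static const char cache[CACHED_BYTES] = "cached!";  /* data already in the system cache */

/* Fast I/O entry point: true only if the request can be serviced
 * entirely from the cache; false means "take the IRP path". */
static bool fast_io_read(size_t offset, size_t len, char *out) {
    if (offset + len > CACHED_BYTES)
        return false;                 /* not (fully) cached: bail out */
    memcpy(out, cache + offset, len);
    return true;
}

/* IRP path: a real FSD would build an IRP and send it down the
 * driver stack; here we just fake a read from disk. */
static void build_and_dispatch_irp(size_t offset, size_t len, char *out) {
    (void)offset;
    memset(out, 'D', len);            /* pretend the data came from disk */
}

/* The I/O Manager's logic: try fast I/O first, fall back to an IRP.
 * Returns true if the fast path was used. */
static bool io_manager_read(size_t offset, size_t len, char *out) {
    if (fast_io_read(offset, len, out))
        return true;                  /* serviced without creating an IRP */
    build_and_dispatch_irp(offset, len, out);
    return false;                     /* serviced via the IRP path */
}
```

The key shape to notice: the IRP machinery is only ever exercised when the fast path declines the request.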
Learning through exchange...
hellangel
#3
Posted: 2004-12-13 11:15
Why Fast I/O?
Let's recall how a typical file system buffered I/O (read/write) request is handled:
1. First, the I/O Manager creates an IRP describing the request.
2. This IRP is dispatched to the appropriate FSD entry point, where the driver
extracts the various parameters that define the I/O request (e.g., the buffer
pointer supplied by the caller and the amount of data requested) and validates
them.
3. The FSD acquires appropriate resources to provide synchronization across
concurrent I/O requests and checks whether the request is for buffered or
nonbuffered I/O.
4. Buffered I/O requests are sent by the FSD to the NT Cache Manager.
5. If required, the FSD initiates caching before dispatching the request to the NT
Cache Manager.
6. The NT Cache Manager attempts to transfer data to/from the system cache.
7. If a page fault is incurred by the NT Cache Manager, the request will recurse
back into the FSD read/write entry point as a paging I/O request.
You should note that, in order to resolve a page fault, the NT VMM issues a
paging I/O request to the I/O Manager, which creates a new IRP structure
(marked for noncached, paging I/O) and dispatches it to the FSD. The original
IRP is not used to perform the paging I/O.
8. The FSD receives the new IRP describing the paging I/O request and transfers
the requested byte range to/from secondary storage.
Lower-level disk drivers assist the FSD in this transfer.
The NT designers made two observations that help explain the
evolution of the fast I/O method:
• Most user I/O requests are synchronous and blocking (i.e., the caller does not
mind waiting until the data transfer has been achieved).
• Most I/O requests to read/write data can be satisfied directly by transferring
data from/to the system cache.
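Steps 1-8 above can be sketched as a small user-mode simulation. All the names here (fsd_read, cache_manager_copy_read, the page_resident flag) are illustrative stand-ins, not kernel APIs; the point is the recursion in step 7: a page fault turns into a *new* noncached paging request that re-enters the same FSD entry point.

```c
/* Sketch of a buffered read that recurses back into the FSD as a
 * paging I/O request on a page fault. Illustrative only. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

enum { PAGE_BYTES = 8 };
static char disk_data[PAGE_BYTES] = "ondisk!";  /* secondary storage */
static char sys_cache[PAGE_BYTES];              /* the system cache page */
static bool page_resident = false;              /* is the page in the cache? */

static void fsd_read(bool paging_io, char *out, size_t len);

/* Cache Manager: copy from the cache; fault the page in first if needed. */
static void cache_manager_copy_read(char *out, size_t len) {
    if (!page_resident) {
        /* Page fault: the VMM issues a NEW paging IRP (noncached) back
         * into the FSD; the original IRP is not reused (step 7). */
        fsd_read(true, sys_cache, PAGE_BYTES);
        page_resident = true;
    }
    memcpy(out, sys_cache, len);
}

/* FSD read entry point (steps 2-5 and 8). */
static void fsd_read(bool paging_io, char *out, size_t len) {
    if (paging_io) {
        memcpy(out, disk_data, len);   /* step 8: transfer from storage */
        return;
    }
    cache_manager_copy_read(out, len); /* step 4: hand off to the Cache Manager */
}
```

A single cached read thus enters fsd_read twice: once as buffered I/O, once (recursively, via the fault) as paging I/O.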
In spring sleep I missed the dawn; everywhere I hear birds singing. Last night came wind and rain; who knows how many blossoms fell?
chencheng
#4
Posted: 2004-12-13 15:22
[quoting fslife] Both the IRP path and the FastIo path are invoked by the I/O Manager. When a FastIo call does not succeed (when the requested data is not in the cache), the I/O Manager builds an IRP and sends it to the top of the driver stack. If the FastIo call succeeds, the IRP path is never executed. For details, see the OSR documentation; it explains this very clearly.

Where can I find the OSR documentation? I couldn't find it on this site.
And let me quietly ask: what is OSR, anyway?
chencheng
#5
Posted: 2004-12-13 16:24
[quotes hellangel's "Why Fast I/O?" passage above]
Let me translate the brother's passage above; see whether I've got it right:

Why use fast I/O?
Let's look at how a typical file-system read/write I/O request is handled:
1. First, the I/O Manager creates an IRP describing the request.
2. The IRP is then dispatched to the appropriate dispatch entry point of the file system driver, which unpacks the IRP's parameters and validates them(?).
3. The driver acquires the appropriate resources to synchronize concurrent I/O requests(?) and checks whether the IRP is a buffered I/O request (is buffered I/O the kind of I/O that needs memory?).
4. If it is a buffered I/O request, the driver hands it to the Cache Manager.
5. If necessary, before handing it to the Cache Manager, the driver initializes a portion of cache space(?).
6. The Cache Manager transfers the data in or out (as the IRP describes).
7. If the Cache Manager incurs a page fault, the buffered I/O request is sent back to the file system driver as a paging request (note: to resolve a page fault, the NT VMM issues a paging I/O request to the I/O Manager, which then creates a new IRP describing a noncached paging I/O request and dispatches it to the file system driver).
8. After receiving the new IRP describing the (noncached) paging I/O request, the driver allocates a piece of cache space as the IRP requires.
The lower-level drivers play a supporting role in this data transfer.
The following two observations made by the NT designers help explain where fast I/O applies:
1. Most user requests are synchronous and involve large data transfers.
2. Most I/O requests read/write data directly from the system cache.

Looking at the above, does it mean that every fast I/O dispatch function corresponds to an IRP? In that case, do we do nothing at all in the corresponding IRP dispatch routine? Or are we responsible for sending the buffered I/O requests to the NT Cache Manager?
Why didn't I see any of this in FileMon?
hellangel
#6
Posted: 2004-12-13 16:48
Once they had made the two observations listed, the NT I/O Manager developers
decided that the sequence of operations used in a typical I/O request could be
further streamlined to help achieve better performance. Certain operations
appeared to be redundant and could probably be discarded in order to make user
I/O processing more efficient. Specifically, the following steps seemed
unnecessary:
Creating an IRP structure to describe the original user request, especially if the IRP
was not required for reuse
Assuming that the request would typically be satisfied directly from the
system cache, it is apparent that the original IRP structure, with its multiple
stack locations and with all of the associated overhead in setting up the I/O
request packet, is not really required or fully utilized. It seems to make more
sense to dispense with this operation altogether and simply pass the I/O
request parameters directly to the layer that would handle the request.
Invoking the FSD
This may seem a little strange to you but a legitimate observation made by
the NT designers was that, for most synchronous cached requests, it seems to
be redundant to get the FSD involved at all in processing the I/O transfer.
After all, if all that an FSD did was route the request to the NT Cache
Manager, it seemed to be more efficient to have the I/O Manager directly
invoke the NT Cache Manager and bypass the FSD completely.
This can only be done if caching is initiated on the file stream, so that the
Cache Manager is prepared to handle the buffered I/O request.
Becoming Efficient: the Fast I/O Solution
Presumably, after pondering the observations listed here, NT I/O designers
decided that the new, more efficient sequence of steps in processing user I/O
requests should be as follows:
1. The I/O Manager receives the user request and checks if the operation is
synchronous.
2. If the user request is synchronous, the I/O Manager determines whether
caching has been initiated for the file object being used to request the I/O
operation.*
For asynchronous operations, the I/O Manager follows the normal method of
creating an IRP and invoking the driver dispatch routine to process the I/O
request.
3. If caching has been initiated for the file object as determined in Step 2, the
I/O Manager invokes the appropriate fast I/O entry point.
The important point to note here is that the I/O Manager assumes that the
fast I/O entry point must have been initialized if the FSD supports cached file
streams. If you install a debug version of the operating system, you will actually
see an assertion failure if the fast I/O function pointer is NULL.
Note that a pointer to the fast I/O dispatch table is obtained by the I/O
Manager from the FastIoDispatch field in the driver object data structure.
4. The I/O Manager checks the return code from the fast I/O routine previously
invoked.
A TRUE return code value from the fast I/O dispatch routine indicates to the
I/O Manager that the request was successfully processed via the fast I/O path.
Note that the return code value TRUE does not indicate whether the request
succeeded or failed; all it does is indicate whether the request was processed
or not. The I/O Manager must examine the IoStatus argument supplied to
the fast I/O routine to find out if the request succeeded or failed.
A return code of FALSE indicates that the request could not be processed via
the fast I/O path. The I/O Manager accepts this return code value and, in
response, simply reverts to the more traditional method of creating an IRP
and dispatching it to the FSD.
This point is very important for you to understand. The NT I/O subsystem
designers did not wish to force an FSD to have to support the fast I/O
method of obtaining data. Therefore, the I/O Manager allows the FSD to
return FALSE from a fast I/O routine invocation and simply reissues the
request using an IRP instead.
5. If the fast I/O routine returned success, the I/O Manager updates the
CurrentByteOffset field in the file object structure (since this is a
synchronous I/O operation) and returns the status code to the caller.
The advantage of using the new sequence of operations is that synchronous I/O
requests can be processed without having to incur the overhead of either building
an IRP structure (and the associated overhead of completion processing for the
IRP), or routing the request via the FSD dispatch entry point.
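The five steps above, and in particular the point in step 4 that a TRUE return only means "processed via the fast path" while the I/O status block still carries success or failure, can be sketched as follows. The names, structure layout, and status codes here are hypothetical stand-ins for the real kernel structures:

```c
/* Sketch: TRUE from a fast I/O routine means "handled on the fast
 * path", NOT "succeeded" -- the caller must inspect IoStatus. */
#include <stdbool.h>

typedef struct {
    long Status;        /* 0 = success, nonzero = error code (stand-in) */
    long Information;   /* bytes transferred */
} IO_STATUS;

enum { STATUS_OK = 0, STATUS_END_OF_FILE = -1 };
enum { FILE_SIZE = 100 };   /* pretend file length in bytes */

/* Fast I/O read: returns false only when the fast path cannot be
 * taken at all (e.g., caching not initiated on the file object);
 * otherwise true, with the real outcome reported in *ios. */
static bool FastIoReadSim(long offset, long len, bool caching_initiated,
                          IO_STATUS *ios) {
    if (!caching_initiated)
        return false;                     /* step 2/3: fall back to an IRP */
    if (offset >= FILE_SIZE) {
        ios->Status = STATUS_END_OF_FILE; /* processed, but the request FAILED */
        ios->Information = 0;
        return true;
    }
    ios->Status = STATUS_OK;              /* processed and succeeded */
    ios->Information = (offset + len > FILE_SIZE) ? FILE_SIZE - offset : len;
    return true;
}
```

On a false return the I/O Manager would build an IRP and re-dispatch, exactly as in step 4; on a true return it would update CurrentByteOffset (step 5) and hand ios->Status back to the caller.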
yinyunan1210
#7
Posted: 2010-12-07 19:48
 
hello world!