There are several other posts around this but none really seem to point to a solution. Here is my situation:
1. We've been running SP1 of Standard Edition in production with no problems since October (4GB RAM, Windows Server Standard, 4 fact tables, 20 million rows in the largest fact table).
2. I applied SP2 to development last week. While working in dev I saw the "Server: The operation has been cancelled." error for the first time while making some dimension changes (I'd never seen this error before SP2). I tried to process the cube several times and it failed the same way every time. I backed out my changes, processed the cube, and all was well. I re-applied some of my changes and all was still well. I decided to upgrade prod, since at that point I thought the problem was some bad configuration of the dimension.
3. We upgraded two prod servers with the exact same hardware configuration to SP2, still Standard Edition (NOTE: the cube structure was not changed). Processing ran fine from Monday's load until last night, when one of the servers failed during the cube processing step with the "Server: The operation has been cancelled." error.
4. I've got the process running again after restarting the Analysis Services service...the only thing I could think to try right now. Hopefully it works...I didn't see any posts with any guidance that might help.
Has anyone else seen a problem where what worked in SP1 stopped working in SP2? Are most people running SP2 using the most recent patch rollup? Should I move to that? Any hope this issue was solved in one of the patch rollups?
Thanks.
Sounds like you might have a CommitTimeout set if you are seeing these operation cancelled errors from the processing task. As far as I can tell the default setting for ForceCommitTimeout was changed in SP2, but I did not think there was one for CommitTimeout. See this post for more information on these two settings: http://geekswithblogs.net/darrengosbell/archive/2007/04/24/SSAS-Processing-ForceCommitTimeout-and-quotthe-operation-has-been-cancelledquot.aspx
|||Thanks for the response. I had already tried that with no luck, based on other posts I've seen. I saw one posting where the person said they have to restart the services every night. I just set EVERY timeout in the properties (by choosing advanced) to 0...I just had a Cancel Operation in dev again. I'll try it with all the timeouts set to 0 to see what happens (settings sketch below).
Thanks.
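For anyone looking for where these timeouts live: they show up under the server properties in Management Studio once "Show Advanced (All) Properties" is checked, and they are stored in msmdsrv.ini in the Analysis Services Config folder. A minimal sketch of the msmdsrv.ini fragment, assuming the stock file layout (surrounding elements elided; a service restart is needed after hand-editing the file):

    <ConfigurationSettings>
      <!-- 0 disables the forced commit timeout; SP2 reportedly changed the default to 30000 ms -->
      <ForceCommitTimeout>0</ForceCommitTimeout>
      <!-- 0 is already the documented default; setting it explicitly rules it out -->
      <CommitTimeout>0</CommitTimeout>
    </ConfigurationSettings>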
|||This problem is getting really annoying. On one server it has never happened; on another it happens about every other day. If I restart the service then the process runs through. Note that in dev restarting the service doesn't always fix it...sometimes it just takes running it multiple times. Something in SP2 broke SSAS processing...I never had a single problem in SP1.
|||I'm surprised no one else is seeing these issues with SP2. I had no issues with cube processing until I applied SP2. This time I got a different error. Note that so far, in every case, if I just restart the service then the next cube load works fine. Is there any other tracing I can turn on to help identify what the issue is?
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,Analysis Services Processing Task,{F9170A49-2D23-4DA6-962C-CD55A57C191A},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1056964601,0x,Internal error: The operation terminated unsuccessfully.
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,master,{F9625461-CC09-4D2F-A3DE-6B64B9D0E230},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1056964601,0x,Internal error: The operation terminated unsuccessfully.
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,Analysis Services Processing Task,{F9170A49-2D23-4DA6-962C-CD55A57C191A},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1056767999,0x,Memory error: Allocation failure : Not enough storage is available to process this command. .
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,master,{F9625461-CC09-4D2F-A3DE-6B64B9D0E230},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1056767999,0x,Memory error: Allocation failure : Not enough storage is available to process this command. .
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,Analysis Services Processing Task,{F9170A49-2D23-4DA6-962C-CD55A57C191A},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1054932978,0x,Errors in the OLAP storage engine: An error occurred while processing the 'Tracking F' partition of the 'Tracking' measure group for the 'Warehouse' cube from the Cube database.
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,master,{F9625461-CC09-4D2F-A3DE-6B64B9D0E230},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1054932978,0x,Errors in the OLAP storage engine: An error occurred while processing the 'Tracking F' partition of the 'Tracking' measure group for the 'Warehouse' cube from the Cube database.
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,Analysis Services Processing Task,{F9170A49-2D23-4DA6-962C-CD55A57C191A},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1054932986,0x,Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
OnError,DWS569794SQL2,NT AUTHORITY\SYSTEM,master,{F9625461-CC09-4D2F-A3DE-6B64B9D0E230},{1E79FD64-2D1F-48A8-910B-30501E61BF0E},7/26/2007 7:08:48 AM,7/26/2007 7:08:48 AM,-1054932986,0x,Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
In the errors you have, it seems that this is the root cause:
Memory error: Allocation failure : Not enough storage is available to process this command
If you have large dimensions, a common solution for out-of-memory errors is to reduce the parallelism used when processing dimension attributes. There is a server property, CoordinatorExecutionMode; the default is -4, which means the server will process 4 x NumberOfProcessors dimension attributes in parallel (so if you have many attribute members, processing them in parallel, even within a single dimension, can run the server out of memory). Try setting it to 1 (see the sketch after this post).
Adrian Dumitrascu
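A sketch of that change as it would sit in msmdsrv.ini; the property can also be changed through Management Studio's advanced server properties, and its placement at the top level of the config file is assumed from the stock SP2 file:

    <ConfigurationSettings>
      <!-- -4 (default) = up to 4 x number of processors in parallel; 1 = one job at a time -->
      <CoordinatorExecutionMode>1</CoordinatorExecutionMode>
    </ConfigurationSettings>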
|||I've been having the same issue with two individual servers, both running SP2; a third server running SP1 will still build the exact same cube without fail. On both SP2 servers the cube simply hangs after reporting a couple of attribute key not found errors. Even running a trace reveals no actual errors or further progress from that point, and any attempt to stop the processing, short of restarting the SSAS service, fails.
I've tried changing ForceCommitTimeout and CoordinatorExecutionMode in vain.
|||Getting exactly the same problem. Applied SP2, and the cube starts processing but never completes, with no obvious errors. Desperate for a solution...
|||
Adrian, your response is not the answer. There are problems with SP2 that did not exist in SP1. I do have some updates.
1. I did try setting the flag to ignore the memory errors (MemoryLimitErrorEnabled). I believe this did make a difference.
2. I'm not completely sure about #1, because I also made a big cube change at the same time. A long time ago I had built a degenerate dimension because I needed to implement some functionality quickly, and I suspected it was what was causing the cube processing errors. I built a new dimension and updated my ETL to create a standard dimension to replace the degenerate one. Once this was done (crossing my fingers), the cube has built without any issue.
For the other people posting: are you using degenerate dimensions? Have you tried setting MemoryLimitErrorEnabled to false (see the sketch below)?
Microsoft readers, please note the common thread, though: these issues did not exist in SP1.
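For anyone trying that flag: it appears as Memory \ MemoryLimitErrorEnabled in Management Studio's advanced server properties, and in msmdsrv.ini it should sit inside the Memory element. A minimal sketch, with the nesting assumed from the stock config file (0 = false, 1 = true):

    <ConfigurationSettings>
      <Memory>
        <!-- 0 = false: do not fail processing with an allocation error when the memory limit is reached -->
        <MemoryLimitErrorEnabled>0</MemoryLimitErrorEnabled>
      </Memory>
    </ConfigurationSettings>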
|||I believe this error is due to lack of memory, either from a genuine lack of memory on the machine or (more likely) from the 32-bit platform itself. To the best of my knowledge, a 32-bit Analysis Services process can address at most 2 GB of virtual memory by default, or about 3 GB with the /3GB boot switch (sketched below), no matter how much RAM and page file the machine has. In my own experience, these errors occur when I am processing a large cube or dimension and the process's memory usage reaches about 2.5 GB. I think setting MemoryLimitErrorEnabled to false helps greatly, as does setting processing operations to run in parallel but with only one process at a time (for some reason this works much better than "sequential"). I think the only permanent solution is to upgrade the server to 64-bit, and that is what we are doing now with our server.
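On the /3GB point: a 32-bit, large-address-aware process such as msmdsrv.exe can only use the third gigabyte of address space when the /3GB switch is present in boot.ini on 32-bit Windows Server 2003. A sketch of the boot entry, with the ARC path assumed (a reboot is required):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB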
Restarting the service clears out the cache, which makes more application memory available, and likewise would make the process more likely to complete successfully.
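If restarting the service ends up being the nightly workaround, it can at least be scripted ahead of the load. A sketch using the SQL Server 2005 default-instance service name; a named instance would be MSOLAP$InstanceName instead:

    net stop MSSQLServerOLAPService
    net start MSSQLServerOLAPService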